Guest Post: The Case Against Requiring Social Media Companies to Proactively Monitor for ‘Anti-Judiciary Content’

This post is authored by Dhruv Bhatnagar

Through an order dated July 19, 2022 (“Order”), Justice G.R. Swaminathan of the Madras High Court initiated proceedings for criminal contempt against YouTuber ‘Savukku’ Shankar. The genesis of this case is a tweet in which Shankar questioned who Justice Swaminathan met before delivering a verdict quashing criminal proceedings against another content creator. Shankar’s tweet on Justice Swaminathan has been described in the Order as ‘an innuendo intended to undermine the judge’s integrity’.

In the Order, Justice Swaminathan has observed that Chief Compliance Officers (“CCOs”) of social media companies (“SMCs”) are obligated to ensure that “content scandalising judges and judiciary” is not posted on their platforms “and if posted, [is] taken down”. To contain the proliferation of ‘anti-judiciary content’ on social media, Facebook, Twitter, and YouTube have been added as parties to this case. Their CCOs have been directed to document details of complaints received against Shankar and explain whether they have considered taking proactive steps to uphold the dignity of the judiciary.

Given that users access online speech through SMCs, compelling SMCs to exercise censorial power on behalf of State authorities is not a novel development. However, suo motu action to regulate ‘anti-judiciary content’ in India may create more problems than it would solve. After briefly discussing inconsistencies in India’s criminal contempt jurisprudence, this piece highlights the legal issues with standing judicial orders directing SMCs to proactively monitor for ‘anti-judiciary content’ on their platforms. It also catalogues the practical difficulties such orders would pose for SMCs and argues against the imposition of onerous proactive moderation obligations upon them, to prevent the curtailment of users’ freedom of speech.

Criminal contempt in India: Contours and Splintered Jurisprudence

The Contempt of Courts Act, 1971 (“1971 Act”) codifies contempt both as a civil and a criminal offence in India. Civil contempt refers to wilful disobedience of judicial pronouncements, whereas criminal contempt is defined as act(s) that either scandalise or lower the authority of the judiciary, interfere with the due course of judicial proceedings, or obstruct the administration of justice. Both types of contempt are punishable with a fine of up to Rs. 2,000/-, imprisonment of up to six months, or both. The Supreme Court and High Courts, as courts of record, are both constitutionally (under Articles 129 and 215) and statutorily (under Section 15 of the 1971 Act) empowered to punish individuals for contempt.

Given that “scandalis[ing]” or “tend[ing] to scandalise” a court is a broad concept, judicial interpretation and principles constitute a crucial source for understanding the remit of this offence. However, there is little consistency on this front owing to a divergence in judicial decisions over the years, with some courts construing the offence in narrow terms and others broadly.

In 1978, Justice V.R. Krishna Iyer enunciated, inter alia, the following guidelines for exercising criminal contempt jurisdiction in S. Mulgaokar (analysed here):

  • Courts should exercise a “wise economy of use” of their contempt power and should not be prompted by “easy irritability” (¶27).
  • Courts should strike a balance between the constitutional values of free criticism and the need for a fearless judicial process while deciding contempt cases. The benefit of doubt must always be given since even fierce or exaggerated criticism is not a crime (¶28).
  • Contempt is meant to prevent obstruction of justice, not offer protection to libelled judges (¶29).
  • Judges should not be hypersensitive to criticism. Instead, they should endeavour to deflate even vulgar denunciation through “condescending indifference…” (¶32).

Later, in P.N. Duda (analysed here), the Supreme Court restricted the scope of criminal contempt only to actions having a proximate connection to the obstruction of justice. The Court found that a minister’s speech assailing its judges for being prejudiced against the poor, though opinionated, was not contemptuous since it did not impair the administration of justice.

However, subsequent judgments have not always adopted this tolerant stance. For instance, in D.C. Saxena (analysed here), the Supreme Court found that the essence of this offence was lowering the dignity of judges, and even mere imputations of partiality were contemptuous. Later, in Arundhati Roy (analysed here), the Supreme Court held that opinions capable of diminishing public confidence in the judiciary also attract contempt. Here, the Court noted that the respondent had caused public injury by creating a negative impression in the minds of the people about judicial integrity. This line of reasoning deviates from Justice Krishna Iyer’s guidelines in Mulgaokar, which had advised against using contempt merely to defend the maligned reputation of judges. Not only does this rationale allow for easier invocation of the offence of contempt, but it is also premised on a paternalistic assumption that India’s impressionable citizenry may be swayed by malicious and irrelevant vilification of the judiciary.

Given the above disparity in judicial opinions, Shankar’s guilt ultimately depends on the standards applied to determine the legality of his tweet. As per the Mulgaokar principles, Shankar’s tweet may not be contemptuous since it does not present an imminent danger of interference with the administration of justice. However, if assessed according to the Saxena or Roy standard, the tweet could be considered contemptuous simply because it imputes ulterior motives to Justice Swaminathan’s decision-making.

It is submitted that the Mulgaokar principles align more closely with the constitutional requirement that restrictions on speech be ‘reasonable’, as the principles advocate restricting only speech that constitutes a proximate threat to a permissible state aim (contempt of court) set out in Article 19(2). For this reason, as a general practice, it may be advisable for judges to consistently apply and endorse these principles while deciding criminal contempt cases.

Difficulties in proactive regulation of ‘anti-judiciary content’

Justice Swaminathan’s observation in the Order that SMCs have a ‘duty to ensure content scandalising judges is not posted, and if posted is taken down’ suggests that he expects such content to be proactively identified and removed by SMCs from their platforms. In practice, however, standing judicial orders imposing such broad obligations upon SMCs would not only exceed their duties under extant Indian law but could also lead to legal speech being taken down. These concerns are elaborated below:

Incompatibility with legal obligations:

Although the Information Technology Act, 2000 does not specifically require SMCs to proactively monitor content, an obligation of this nature has been introduced through delegated legislation in Rule 4(4) of the 2021 IT Rules. This rule requires SMCs qualifying as ‘significant social media intermediaries’ (“SSMIs”) (explained here) to, inter alia, “endeavour to deploy” technological measures to proactively identify content depicting rape, child sexual abuse, or content identical to information previously disabled pursuant to governmental or judicial orders. However, ‘anti-judiciary content’ is not a content category which SSMIs need to endeavour to proactively identify. Thus, any judicial directions imposing this mandate upon them would exceed the scope of their legal obligations.

Further, in Shreya Singhal (analysed here), the Supreme Court expressly required a court order determining the illegality of content to be passed before SMCs were required to remove the content. However, if proactive monitoring obligations are imposed, SMCs would have to identify and remove content on their own, without a judicial determination of legality. Such obligations would also undermine the Court’s ruling in Visakha Industries (analysed here), which advised against proactive monitoring to prevent intermediaries from becoming “super censors” and “denud[ing] the internet of it[s] unique feature [as] a democratic medium for all to publish, access and read any and all kinds of information” (¶53).

Unrealistic expectations and undesirable content moderation outcomes:

Judicial orders directing SMCs to proactively disable ‘anti-judiciary content’ essentially require them to objectively and consistently enforce standards on criminal contempt on their platforms. This may be problematic considering that the doctrine of contempt emerging from constitutional courts, where judges possess a significantly higher degree of specialised knowledge on what constitutes contempt of court, is itself ambiguous at best. Put simply, when even courts have regularly disagreed on the contours of contemptuous speech, it may be unrealistic to expect SMCs to make more coherent decisions.

A major risk with delegating the burden of complex decision-making about free speech to private intermediaries is excessive content removal. Across jurisdictions, platform providers have erred on the side of caution and over-removed content when faced with potential legal risks. This is evidenced through empirical studies on the notice-takedown regime for copyright infringing content in the US and due diligence obligations for intermediaries in India.

Given their documented propensity for over-compliance, directions by Indian courts requiring SMCs to proactively take down ‘anti-judiciary content’ may incentivise the excessive removal of even permissible critique of judicial actions. This would ultimately restrict social media users’ right to free expression.

Way forward

Considering the issues outlined above, it may be advisable for the Madras High Court to refrain from imposing proactive monitoring obligations upon SMCs. Consistent with the Mulgaokar principles, judges exercising their criminal contempt jurisdiction should issue blocking directions for online contemptuous speech only against content which poses a credible threat of obstructing justice, and not against content which they perceive as lowering their reputation. Such directions should also identify specific pieces of content rather than impose broad obligations on SMCs that may ultimately restrict free expression.

CCG’s Comments to the Ministry of Electronics & Information Technology on the proposed amendments to the Intermediary Guidelines 2021

On 6 June 2022, the Ministry of Electronics and Information Technology (“MeitY”) released proposed amendments to Parts I and II of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”). CCG submitted its comments on the proposed amendments, highlighting its key feedback and concerns. The comments were authored by Vasudev Devadasan and Bilal Mohamed and reviewed and edited by Jhalak M Kakkar and Shashank Mohan.

The 2021 IT Rules were released in February last year, and Parts I and II of the Guidelines set out the conditions intermediaries must satisfy to avail of legal immunity (or ‘safe harbour’) for hosting unlawful content under Section 79 of the Information Technology Act, 2000 (“IT Act”). The 2021 IT Rules have been challenged in several High Courts across the country, and the Supreme Court is currently hearing a transfer petition on whether these challenges should be clubbed and heard collectively by the apex court. In the meantime, MeitY has released proposed amendments to the 2021 IT Rules which seek to make incremental but significant changes to the Rules.

CCG’s comments to the MeitY can be summarised as follows:

Dilution of safe harbour in contravention of Section 79(1) of the IT Act

The core intention behind providing intermediaries with safe harbour under Section 79(1) of the IT Act is to ensure that intermediaries do not restrict the free flow of information online due to the risk of being held liable for third-party content uploaded by users. The proposed amendments to Rules 3(1)(a) and 3(1)(b) of the 2021 IT Rules potentially impose an obligation on intermediaries to “cause” and “ensure” that their users do not upload unlawful content. These amendments may require intermediaries to make complex determinations on the legality of speech and cause them to remove content that carries even the slightest risk of liability. This may result in the restriction of online speech and the corporate surveillance of Indian internet users by intermediaries. If, instead, the proposed amendments are interpreted as not requiring intermediaries to actively prevent users from uploading unlawful content, they may be functionally redundant, and we suggest they be dropped to avoid legal uncertainty.

Concerns with Grievance Appellate Committee

The proposed amendments envisage one or more Grievance Appellate Committees (“GAC”) that sit in appeal over intermediary determinations with respect to content. Users may appeal to a GAC against the decision of an intermediary not to remove content despite a user complaint, or alternatively, request a GAC to reinstate content that an intermediary has voluntarily removed or to lift account restrictions that an intermediary has imposed. The creation of GAC(s) may exceed the Government’s rulemaking powers under the IT Act. Further, the GAC(s) lack the necessary safeguards in their composition and operation to ensure the independence required by law of such an adjudicatory body. Such independence and impartiality may be essential, as the Union Government is responsible for appointing individuals to the GAC(s) but may itself (or through its functionaries or instrumentalities) be a party before the GAC(s). Further, we note that the originator, the legality of whose content is in dispute before a GAC, has not expressly been granted a right to a hearing before the GAC. Finally, we note that the GAC(s) may lack the capacity to deal with the high volume of appeals against content and account restrictions. This may lead to situations where, in practice, only a small number of internet users are afforded redress by the GAC(s), leading to inequitable outcomes and discrimination amongst users.

Concerns with grievance redressal timeline

Under the proposed amendment to Rule 3(2), intermediaries must acknowledge a complaint by an internet user for the removal of content within 24 hours, and ‘act and redress’ the complaint within 72 hours. CCG’s comments note that the 72-hour timeline may cause online intermediaries to over-comply with content removal requests, leading to the possible takedown of legally protected speech at the behest of frivolous user complaints. Empirical studies conducted on Indian intermediaries have demonstrated that smaller intermediaries lack the capacity and resources to make complex legal determinations of whether the content complained against violates the standards set out in Rule 3(1)(b)(i)-(x), while larger intermediaries are unable to address the high volume of complaints within short timelines, leading to the mechanical takedown of content. We suggest that any requirement that online intermediaries address user complaints within short timelines could differentiate between content that is ex-facie (on the face of it) illegal and causes severe harm (e.g., child sexual abuse material or gratuitous violence), and other types of content where determinations of legality may require legal or judicial expertise, such as copyright or defamation.

Need for specificity in defining due diligence obligations

Rule 3(1)(m) of the proposed amendments requires intermediaries to ensure a “reasonable expectation of due diligence, privacy and transparency” to avail of safe harbour; while Rule 3(1)(n) requires intermediaries to “respect the rights accorded to the citizens under the Constitution of India.” These rules do not impose clearly ascertainable legal obligations, which may increase compliance burdens, hamper enforcement, and result in inconsistent outcomes. In the absence of specific data protection legislation, the obligation to ensure a “reasonable expectation of due diligence, privacy and transparency” is unclear. Fundamental rights obligations were drafted and developed in the context of citizen-State relations and may not be aptly transposed to relations between intermediaries and users. Further, the content of ‘respecting Fundamental Rights’ under the Constitution is itself contested and open to reasonable disagreement between various State and constitutional functionaries. Requiring intermediaries to uphold such obligations will likely lead to inconsistent outcomes based on varied interpretations.

Transparency reporting under the Intermediary Guidelines is a mess: Here’s how we can improve it

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) represent India’s first attempt at regulating large social media platforms, with the Guidelines creating distinct obligations for ‘Significant Social Media Intermediaries’ (“SSMIs”). While certain provisions of the Guidelines concerning SSMIs (like the traceability requirement) are currently under legal challenge, the Guidelines also introduced a less controversial requirement that SSMIs publish monthly transparency reports regarding their content moderation activities. While this reporting requirement is arguably a step in the right direction, scrutinising the actual documents published by SSMIs reveals a patchwork of inconsistent and incomplete information – suggesting that Indian regulators need to adopt a more comprehensive approach to platform transparency.

This post briefly sets out the reporting requirement under the Intermediary Guidelines before analysing the transparency reports released by SSMIs. It highlights how a focus on figures, coupled with the wide discretion granted to platforms to frame their reports, undermines the goal of meaningful transparency. The figures referred to when analysing SSMI reports pertain to the February-March 2022 reporting period, but the distinct methodologies used by each SSMI to arrive at these figures (more relevant for the present discussion) have remained broadly unchanged since reporting began in mid-2021. The post concludes by making suggestions on how the Ministry of Electronics and Information Technology (“MeitY”) can strengthen the reporting requirements under the Intermediary Guidelines.

Transparency reporting under the Intermediary Guidelines

Social media companies structure speech on their platforms through their content moderation policies and practices, which determine when content stays online and when it is taken down. Even if content is not illegal or taken down pursuant to a court or government order, platforms may still take it down for violating their terms of service (or Community Guidelines); call this ‘violative content’, i.e., content that violates a platform’s terms of service. However, ineffective content moderation can result in violative and even harmful content remaining online, or non-violative content mistakenly being taken down. Given the centrality of content moderation to online speech, the Intermediary Guidelines seek to bring some transparency to the content moderation practices of SSMIs by requiring them to publish monthly reports on their content moderation activities. Transparency reporting helps users and the government understand the decisions made by platforms with respect to online speech. Given the opacity with which social media platforms often operate, transparency reporting requirements can be an essential tool to hold platforms accountable for ineffective or discriminatory content moderation practices.

Rule 4(1)(d) of the Intermediary Guidelines requires SSMIs to publish monthly transparency reports specifying: (i) the details of complaints received and actions taken in response; (ii) the number of “parts of information” proactively taken down using automated tools; and (iii) any other relevant information specified by the government. The Rule therefore covers both ‘reactive moderation’, where a platform responds to a user’s complaint against content, and ‘proactive moderation’, where the platform itself seeks out unwanted content even before a user reports it.

Transparency around reactive moderation helps us understand trends in user reporting and how responsive an SSMI is to user complaints, while disclosures on proactive moderation shed light on the scale and accuracy of an SSMI’s independent moderation activities. A key goal of both reporting datasets is to understand whether the platform is taking down as much harmful content as possible without accidentally also taking down non-violative content. Unfortunately, Rule 4(1)(d) merely requires SSMIs to report the number of links taken down during their content moderation (this is reiterated by the MeitY’s FAQs on the Intermediary Guidelines). The problems with an overly simplistic approach come to the fore upon an examination of the actual reports published by SSMIs.

Contents of SSMI reports – proactive moderation

Based on their latest monthly transparency reports, Twitter proactively suspended 39,588 accounts while Google used automated tools to remove 338,938 pieces of content. However, these figures only document the scale of proactive monitoring and provide no insight into the accuracy of the platforms’ moderation, i.e., how well the moderation distinguishes between violative and non-violative content. The reporting also does not specify whether this content was taken down using solely automated tools, or some mix of automated tools and human review or oversight. Meta (reporting for Facebook and Instagram) reports the volume of content proactively taken down, but also provides a “Proactivity Rate”, defined as the percentage of all flagged content that was flagged proactively (before a user reported it): Proactivity Rate = [proactively flagged content ÷ (proactively flagged content + user reported content)]. However, this metric is also of little use in understanding the accuracy of Meta’s automated tools. Take the following example:

Assume a platform has 100 pieces of content, of which 50 pieces violate the platform’s terms of service and 50 do not. The platform relies on both proactive monitoring through automated tools and user reporting to identify violative content. Now, if the automated tools detect 49 pieces of violative content, and a user reports 1, the platform states that: ‘49 pieces of content were taken down pursuant to proactive monitoring at a Proactivity Rate of 98%’. However, this reporting does not inform citizens or regulators: (i) whether the 49 pieces of content identified by the automated tools are in fact among the 50 pieces that violate the platform’s terms of service (or whether the tools mistakenly took down some legitimate, non-violative content); (ii) how many users saw but did not report the content that was eventually flagged by automated tools and taken down; and (iii) what level and extent of human oversight was exercised in removing content. A high Proactivity Rate merely indicates that automated tools flagged more content than users, which is to be expected. Simply put, numbers aren’t everything; they only disclose the scale of content moderation, not its quality.
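To make the gap concrete, here is a minimal sketch (a hypothetical illustration in Python, not any platform’s actual methodology) that computes the Proactivity Rate alongside a simple precision metric for two versions of the example above: one where the automated tools make no mistakes, and one where 10 of the 49 flagged items are legitimate content.

```python
# Minimal sketch contrasting a "Proactivity Rate" with a precision
# metric, using the hypothetical 100-piece platform from the example
# above. All numbers are illustrative, not drawn from any real report.

def proactivity_rate(proactive_flags: int, user_reports: int) -> float:
    """Share of all flagged content that was flagged proactively."""
    return proactive_flags / (proactive_flags + user_reports)

def precision(true_positives: int, total_flagged: int) -> float:
    """Share of proactively flagged content that actually violated the rules."""
    return true_positives / total_flagged

# Scenario A: the tools flag 49 items, all of them genuinely violative.
# Scenario B: the tools flag 49 items, but 10 are legitimate content.
for label, violative_hits in [("A (no false positives)", 49),
                              ("B (10 false positives)", 39)]:
    pr = proactivity_rate(proactive_flags=49, user_reports=1)
    prec = precision(true_positives=violative_hits, total_flagged=49)
    print(f"Scenario {label}: proactivity rate {pr:.0%}, precision {prec:.0%}")
```

The Proactivity Rate is 98% in both scenarios; only a precision-style metric (or the appeal-reversal data discussed below) would expose the mistaken removals in Scenario B.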

This criticism raises the question: how does one assess the quality of proactive moderation? The Santa Clara Principles represent high-level guidance on content moderation practices developed by international human rights organisations and academic experts to facilitate platform accountability with respect to users’ speech. The Principles require that platforms report: (i) when and how automated tools are used; (ii) the key criteria used by automated tools in making decisions; (iii) the confidence, accuracy, or success rate of automated tools, including in different languages; (iv) the extent of human oversight over automated tools; and (v) the outcomes of appeals against moderation decisions made by automated tools. This last requirement of reporting the outcome of appeals (how many users successfully got content reinstated after it was taken down by proactive monitoring) is a particularly useful metric, as it indicates when platforms themselves acknowledge that their proactive moderation was inaccurate. Draft legislation in Europe and the United States requires platforms to report how often proactive monitoring decisions are reversed. Mandating the reporting of even some of these elements under the Intermediary Guidelines would provide a clearer picture of the accuracy of proactive moderation.

Finally, it is relevant to note that Rule 4(4) of the Intermediary Guidelines requires that the automated tools for proactive monitoring of certain classes of content must be ‘reviewed for accuracy and fairness’. The desirability of such proactive monitoring aside, Rule 4(4) is not self-enforcing and does not specify who should undertake this review, how often it should be carried out, and to whom the results should be communicated.

Contents of SSMI reports – reactive moderation

Transparency reporting with respect to reactive moderation aims to understand trends in user reporting of content and a platform’s responses to user flagging of content. Rule 4(1)(d) requires platforms to disclose the “details of complaints received and actions taken thereon”. However, a perusal of SSMI reporting reveals how the broad discretion granted to SSMIs to frame their reports is undermining the usefulness of the reporting.  

Google’s transparency report adopts the most straightforward understanding of “complaints received”, with the platform disclosing the number of ‘complaints that relate to third-party content that is believed to violate local laws or personal rights’. In other words, where users raise a complaint against a piece of content, Google reports it (30,065 complaints in February 2022). Meta, on the other hand, only reports complaints from: (i) a specific contact form, a link to which is provided in its ‘Help Centre’; and (ii) complaints addressed to the physical post-box address published on the ‘Help Centre’. For February 2022, Facebook received a mere 478 complaints, of which only 43 pertained to content (inappropriate or sexual content), while 135 were from users whose accounts had been hacked, and 59 were from users who had lost access to a group or page. If 43 user reports a month against content on Facebook seems suspiciously low, it likely is – because the method of reporting content that involves the least friction for users (simply clicking on the post and reporting it directly) bypasses the specific contact form that Facebook uses to collate Indian complaints, and thus appears to be absent from Facebook’s transparency reporting. Most of Facebook’s 478 complaints for February have nothing to do with content on Facebook and offer little insight into how Facebook responds to user complaints against content or what types of content users report.

In contrast, Twitter’s transparency reporting expressly states that it does not include non-content-related complaints (e.g., a user locked out of their account), instead limiting itself to content-related complaints – 795 complaints for March 2022, with abuse or harassment (606), hateful conduct (97), and misinformation (33) as the top categories. However, like Facebook, Twitter also has both a ‘support form’ and allows users to report content directly by clicking on it, but fails to specify the sources from which “complaints” are compiled for its India transparency reports. Twitter merely notes that ‘users can report grievances by the grievance mechanism by using the contact details of the Indian Grievance Officer’.

These apparent discrepancies in the number of complaints reported bear even greater scrutiny when the number of users of these platforms is factored in. Twitter (795 complaints/month) has an estimated 23 million users in India, while Facebook (406 complaints/month) has an estimated 329 million users. It is reasonable to expect user complaints to scale with the number of users, but this is evidently not happening, suggesting that these platforms are using different sources and methodologies to determine what constitutes a “complaint” for the purposes of Rule 4(1)(d). This is perhaps a useful time to discuss another SSMI, ShareChat.

ShareChat is reported to have an estimated 160 million users, and for February 2022 the platform reported 56,81,213 user complaints (substantially more than Twitter and Facebook). These complaints are content-related (e.g., hate speech, spam etc.), although with 30% of complaints merely classified as ‘Others’, there is some uncertainty as to what these complaints pertain to. ShareChat’s report states that it collates complaints from ‘reporting mechanism across the platform’. This would suggest that, unlike Facebook (and potentially Twitter), it compiles user complaint numbers from all the methods by which a user can complain against content, and not just a single form tucked away in its help centre documentation. While this may be a more holistic approach, ShareChat’s reporting suffers from other crucial deficiencies. ShareChat’s report makes no distinction between reactive and proactive moderation, merely giving a figure for content that has been taken down. This makes it hard to judge how ShareChat responded to these over 56,00,000 complaints.
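The methodological divergence becomes starker once the reported figures are normalised by estimated user base. Below is a rough back-of-the-envelope sketch (Python, using only the estimates quoted in this post; the figures come from different months and counting methodologies, so the comparison is purely illustrative):

```python
# Complaints per million users, computed from the estimates quoted in
# this post. The figures come from different months and methodologies,
# so this is only a rough illustration of the divergence.

platforms = {
    "Twitter":   {"complaints": 795,       "users_millions": 23},
    "Facebook":  {"complaints": 406,       "users_millions": 329},
    "ShareChat": {"complaints": 5_681_213, "users_millions": 160},
}

for name, figures in platforms.items():
    rate = figures["complaints"] / figures["users_millions"]
    print(f"{name}: ~{rate:,.1f} complaints per million users per month")
```

On these numbers, ShareChat reports more than four orders of magnitude more complaints per user than Facebook, a spread far too large to reflect genuine differences in user behaviour rather than differences in what each platform counts as a “complaint” for the purposes of Rule 4(1)(d).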

Conclusion

Before concluding, it is relevant to note that no SSMI reporting discusses content that has been subjected to reduced visibility or algorithmically downranked. In the case of proactive moderation, Rule 4(1)(d) unfortunately limits itself to content that has been “removed”, although in the case of reactive moderation, reduced visibility would come within the ambit of ‘actions taken in response to complaints’ and should be reported on. Best practices would require platforms to disclose to users when and what content is subjected to reduced visibility. Rule 4(1)(d) did not form part of the draft intermediary guidelines that were subjected to public consultation in 2018, appearing for the first time in its current form in 2021. Broader consultation at the time of drafting may have eliminated such regulatory lacunae and produced a more robust framework for transparency reporting.

That said, achieving meaningful transparency reporting is a hard task. Standardising reporting procedures is a detailed and fraught process that likely requires platforms and regulators to engage in a consultative process – see this document created by Daphne Keller listing out potential problems in reporting procedures. Sample problem: “If ten users notify platforms about the same piece of content, and the platform takes it down after reviewing the first notice, is that ten successful notices, or one successful notice and nine rejected ones?” Given the scale of the regulatory and technical challenges, it is perhaps unsurprising that transparency reporting under the Intermediary Guidelines has gotten off to a rocky start. However, Rule 4(1)(d) itself offers an avenue for improvement. The Rule allows MeitY to specify any additional information that platforms should publish in their transparency reports. In the case of proactive monitoring, requiring platforms to specify exactly how automated tools are deployed, and when content takedowns based on these tools are reversed, would be a good place to start. MeitY must also engage with the functionality and internal procedures of SSMIs to ensure that reporting is harmonised to the extent possible. For example, a “complaint” for Facebook and a “complaint” for ShareChat should ideally have some equivalence. This requires, for a start, MeitY to consult with platforms, users, civil society, and academic experts when thinking about transparency.

CCG’s Week in Review: Curated News in Information Law and Policy [August 26-September 2]

MeitY sought views on ‘non-personal data’; India and France announce joint research consortium on AI and digital partnership after NSA-level talks; Section 144 CrPC imposed in areas of Assam anticipating unrest after the publication of the NRC list as the MHA holds a high-level security meet on Kashmir; and the tussle between MeitY and the Niti Aayog for control over the Rs. 7000 cr AI project continues – presenting this week’s most important developments at the intersection of law and tech.

Aadhaar

  • [Aug 27] Aadhaar integration can weed out fake voters: UIDAI’s Ajay Bhushan Pandey, Business Standard report.
  • [Aug 27] Government to intensify Aadhaar enrolment in J&K after Oct 31: Report, Medianama report; Times Now report; The Quint report
  • [Aug 27] Interview: Why I filed a case to link Aadhaar and Social Media Accounts, The Quint report.
  • [Aug 27] Aadhaar database cannot be hacked even after a billion attempts: Ravi Shankar Prasad, Money Control report.
  • [Aug 27] Most dangerous situation: Justice Srikrishna on EC-Aadhaar linking, The Quint report.
  • [Aug 28] Aadhaar adds to women’s problems in India. Here’s why. The Wire report.
  • [Aug 28] What Centre will tell Supreme Court on Aadhaar and social media account linkage, The Hindustan Times report.
  • [Aug 28] All residents of an MP village have the same date of birth on their Aadhaar, Business Standard report.
  • [Aug 29] Blood banks advised to ask for donors’ Aadhaar cards, Times of India report.
  • [Aug 29] Aadhaar continues to evolve and grow as India issues biometric seafarers’ ID, Biometric Update report.
  • [Aug 31] Aadhaar mandatory for farmers to avail crop loan in Odisha, Odisha Sun Times report.
  • [Sep 1] NRIs to get Aadhaar sans 180-day wait in 3 months, The Hindu report.
  • [Sep 1] Aadhaar-liquor link to check bottle littering? Deccan Herald report.
  • [Sep 1] Linking Aadhaar with social media can lead to insidious profiling of people, says Apar Gupta, Times of India report.

Digital India

  • [Aug 27] NASSCOM-DSCI on National Health Stack: separate regulatory body for health, siloed registries, usage of single ID, Medianama report.
  • [Aug 27] Govt looks to develop electronics component manufacturing base in India: MeitY Secretary, YourStory report; Money Control report.
  • [Aug 30] India is encouraging foreign firms to shift biz from China: report, Medianama report; Reuters report.
  • [Aug 30] Wipro, Google to speed up digital shift of enterprises, ET Telecom report.
  • [Aug 30] Government committed to reach public via technology, Times of India report.
  • [Aug 31] MeitY and Google tie up to Build for Digital India, Livemint report; India TV report; ANI report; The Statesman report; Inc42 report.
  • [Sep 1] Govt is setting up high-tech R&D facilities for India Inc to encourage big-bang projects, ET Tech report.
  • [Sep 1] Digitalisation is now forcing NASSCOM to reinvent itself, ET Tech report.

Free Speech

  • [Aug 26] IAS Officer who quit over ‘losing freedom of expression’ was facing disciplinary action for misconduct, Swarajya Magazine report.
  • [Aug 27] Withdraw media curbs in Kashmir, The Hindu report.
  • [Aug 27] EU data caught in Facebook audio transcribing, Politico report.
  • [Aug 30] BJP issues gag order on Pragya Thakur after ‘black magic’ remark post Arun Jaitley’s death. News 18 report.
  • [Aug 31] Chargesheet filed against ex-Union Minister Salman Khurshid over remark on UP CM Yogi Adityanath, India Today report.
  • [Aug 31] Rafale deal: Rahul Gandhi summoned by Mumbai court for calling Narendra Modi ‘commander-in-thief’, Scroll.in report.
  • [Aug 31] Media freedom being curbed, says Mamata, The Hindu report.
  • [Sep 1] Madurai man booked for Facebook post against Centre, Army, Times of India report.

Internet Shutdowns

  • [Aug 26] Internet suspended in Indonesia’s Papua region for ‘security and order’ amid protests, Medianama report.
  • [Aug 29] Months after pledge to open internet, Ethiopia disrupts connectivity amidst communal violence, Global Voices report.

Data Protection and Privacy

  • [Aug 27] Government’s approach to data is dangerous, says Justice Srikrishna, Medianama report.
  • [Aug 27] Microsoft’s lead EU data watchdog is looking into fresh Windows 10 privacy concerns, Tech Crunch report.
  • [Aug 30] This Week in Tech: Facebook’s privacy pivot (business model not included), The New York Times report.
  • [Aug 31] MeitY seeks views on non-personal data, ET Tech report.
  • [Aug 31] Google to pay out $150-200m over YouTube privacy claims: reports, The Hindu report.
  • [Sep 2] Let data protection Bill deal with personal health data, says IAMAI, Business Standard report.

Intermediary Liability

  • [Aug 27] Government notices an issue in TikTok’s ShareChat notices: To ask TikTok how its intermediary status is consistent with claims on owning content, ET Telecom report; Inc42 report.

E-Commerce

  • [Aug 27] Thailand to tax e-commerce companies from next year, Medianama report.
  • [Aug 27] NRAI sends notices to Swiggy, Zomato, others on deep discounting, lack of transparency, Tech Circle report.
  • [Aug 29] MeitY may not include E-commerce data in privacy bill, The Economic Times report; Medianama report; Inc42 report.
  • [Aug 29] 30% local sourcing FDI rule on single brand retailers relaxed, physical stores before online sales not necessary, Medianama report.
  • [Aug 29] Amazon moves Supreme Court against direct selling companies: Report, Medianama report; The Economic Times report.
  • [Aug 30] India big enough for both e-commerce and small retailers: Rajiv Kumar, ET Tech report.
  • [Aug 30] Zomato, Swiggy and NRAI discuss issues, to meet again in September, Medianama report.
  • [Aug 30] DPIIT asks e-commerce firms to upload FDI compliance certifications, Medianama report.
  • [Aug 31] Why restaurants and aggregators are locking horns over discounts, ET Tech report.
  • [Aug 31] CAIT slams Amazon in public discussion over deep discounting, Entrackr report.
  • [Aug 31] E-marketplaces giving preferential treatment to some: Sellers, ET Tech report.
  • [Sep 2] Swiggy likely to cap restaurant commissions at 25%, ET Tech report.

Digital Payments and FinTech

  • [Aug 30] Another extension for e-wallets: RBI gives 6 months to complete KYC, Entrackr report.
  • [Sep 2] Banks may take 3 years for tech merger, ET Tech report.

Cryptocurrencies

  • [Aug 25] IRS sends new round of letters to Bitcoin and Crypto holders, Coin Telegraph report.
  • [Aug 26] 25-year-old Bitcoin seller faces life sentence for unlicensed exchange, Coin Desk report.
  • [Aug 26] Telegram’s 300 million users could soon be trading Bitcoin and Crypto- Despite serious security warning, Forbes report.
  • [Aug 28] Crypto-jacking virus infects 850,000 servers, hackers run off with millions, Coin Desk report.
  • [Aug 30] UN Official: Crypto makes policing child trafficking ‘exceptionally difficult’, Coin Desk report.
  • [Aug 30] How do we get crypto currency to circulate as money? This experiment might hold the answer, The Print report.
  • [Aug 30] Privacy in Crypto: The Impact of Rising Terrorism Concerns, Forbes report.
  • [Aug 28] Telegram to release its cryptocurrency by October 31, Medianama report; ET Markets report.

Tech and Law Enforcement

  • [Aug 26] End-to-end encryption not essential to WhatsApp as a platform: Tamil Nadu Advocate General, Medianama report.
  • [Aug 27] WhatsApp traceability vulnerable to falsification, claims IFF expert submission, Firstpost report; Medianama report.
  • [Aug 31] A new kind of cybercrimes uses AI and your voice against you, Quartz report.

Tech and National Security

  • [Aug 26] Russia to supply critical components of Gaganyaan, Free Press Journal report.
  • [Aug 26] CAG report on offset deal in Rafale contract to be tabled in Winter Session: Report, News Nation report.
  • [Aug 27] Gaganyaan Mission: Russia to train four Indian astronauts from November, DNA India report.
  • [Aug 27] Centre inks Rs 380 cr deal with private firm for nine precision approach radars, DNA India report.
  • [Aug 27] Navy needs “assured” budget support to build capacity: Chief, The Economic Times report; The Indian Express report; Outlook India report.
  • [Aug 27] ITI Nagpur students to learn to assemble Rafale jets, The Economic Times report.
  • [Aug 27] Incentivise pvt sector for defence production: Brookings, Outlook India report.
  • [Aug 28] Amazon and Microsoft unchallenged in $10bn ‘Jedi’ contract review, Financial Times report.
  • [Aug 28] India’s HAL deepens private sector engagement through Make-II initiative, Jane’s 360 report.
  • [Aug 29] NSA-level meet today, France keen to sell second batch of 36 Rafales, The Indian Express report; Financial Express report; ANI News report.
  • [Aug 30] Russia set to offer submarines during Modi-Putin summit, Defence Aviation Post report.
  • [Aug 30] India must be prepared to face any threat: Vice President Venkaiah Naidu, The New Indian Express report.
  • [Aug 31] US to use fake social media to check on people entering the country, India Today report.

Cybersecurity

  • [Aug 26] Ransomware threat raises National Guard’s role in state cybersecurity in the United States, Statescoop report.
  • [Aug 26] The Pentagon wants to bolster Defense Innovation Unit’s Cyber defenses, Nextgov report.
  • [Aug 27] The importance of training: Cybersecurity awareness as a firewall, Forbes report.
  • [Aug 28] Why cybersecurity is a central ingredient in evolving digital business models, Financial Express report.
  • [Aug 28] Cyber security and the finance sector: the need for stronger data protection capabilities, Security Boulevard report.
  • [Aug 28] India to unveil cybersecurity strategy policy early next year, Financial Express report; Inc42 report.
  • [Aug 28] Face it – Biometrics to be big in cybersecurity, Forbes report.
  • [Aug 28] MHA has taken various measures to counter cyber threat: MoS Kishan Reddy, United News of India report.
  • [Aug 30] Google says hackers have put ‘monitoring implants’ in iPhones for years, The Guardian report; DW report.
  • [Aug 30] Employee errors responsible for half of cybersecurity incidents: report, The Hindustan Times report.
  • [Aug 30] Despite changes by Microsoft, Windows 10 might still be remotely spying on you, Digital Trends report.
  • [Aug 30] Only 5-10% pharma firms have cybersecurity: Expert, Times of India report.

Internal Security: J&K

  • [Aug 27] Kashmir updates: UN Chief urges all parties to avoid escalation, India Today report.
  • [Aug 27] Kashmir: MHA to hold high level security meet; SC will hear Faesal and Shehla Rashid, The Week report.
  • [Aug 29] There is only fear and no ‘freedom’ in the Northeast and J&K, The Wire report.
  • [Aug 29] ‘Feel unsafe at home’: J&K residents accuse security forces of raiding houses, arresting ‘innocent’ Kashmiri youth under Public Safety Act, Firstpost report.
  • [Aug 30] Jammu and Kashmir: Rumours fly thick but slow in absence of communication, The Economic Times report.
  • [Aug 30] Army Chief to review security in J&K today, his first visit after Art 370 repeal, The Hindustan Times report.
  • [Aug 31] Mobile services restored partially in Kashmir’s Kupwara district, ET Telecom report.

Internal Security: North East and the NRC

  • [Aug 29] Security measures tightened in Assam, Sec 144 CrPC in Guwahati ahead of final NRC, India Today report.
  • [Aug 30] Assam police declare 14 districts as sensitive areas, Times of India report.
  • [Aug 30] How the National Citizenship Registration in Assam is shaping a new national identity in India, The Conversation report.
  • [Aug 30] NRC not to solve foreigner problem: Himanta Biswa Sarma, Deccan Herald report.
  • [Aug 31] No Aadhaar from elsewhere for those excluded from NRC, ET Tech report.
  • [Aug 31] Assam on edge a day before publication of NRC, India Today report.
  • [Sep 1] Assam BJP, Opposition unhappy with updated NRC, India Today report.
  • [Sep 1] Assam NRC final list: Centre in no hurry for follow-up, The Hindu report.
  • [Sep 1] Happy to know how many are doubtful citizens, says AIUDF, The Telegraph report.
  • [Sep 1] Indian citizens register excludes 1.9m Assam residents, Financial Times report.

Telecom/5G

  • [Aug 26] India will not compromise on security of telecom networks: Dhotre, ET Telecom report.
  • [Aug 27] 5G spectrum sale may be deferred to early 2020, ET Telecom report.
  • [Aug 27] Govt invites bids to select agency for conducting spectrum auction, ET Telecom report.
  • [Aug 28] Reliance Jio records highest telecom revenue market share in Q1FY20, Medianama report.
  • [Aug 31] Govt focusing on improved telecom connectivity in NE, ET Telecom report.
  • [Aug 28] 3G network to shut by December, 5G adoption not expected, Phonepe, Paytm and more, Medianama report.

More on Huawei

  • [Aug 26] 5G trials: China aggression will work against Huawei, say India officials, Indian Express report.
  • [Aug 27] New Huawei OS Shock: ‘Confirmation’ of Russian Software for mobile devices, Forbes report; Reuters report.
  • [Aug 27] Huawei: UK to make 5G decision ‘by the autumn’, BBC News report.
  • [Aug 29] Huawei’s next flagship phone blocked from using Google apps, The Guardian report.
  • [Aug 30] Huawei under probe by US prosecutors over new allegations, ET Telecom report; Business Standard report.
  • [Sep 1] Huawei just launched 5G in Russia with Putin’s Support: ‘Hello Splinternet’, Forbes report.

Emerging Tech and AI

  • [Aug 27] Niti Aayog, MeitY spar over Rs. 7,000 crore AI mission, ET Telecom report; Inc42 report; Entrackr report.
  • [Aug 27] India, France announce joint research consortium on AI and a digital partnership, Medianama report.
  • [Aug 28] Elon Musk and Jack Ma debate AI at China Summit, Bloomberg report.
  • [Aug 28] Is this Aadhaar of the future? Facial biometric technology-based chip-enabled cards issued, The Economic Times report.
  • [Aug 28] National security imperative to become $5trillion economy: Amit Shah, Livemint report; The Asian Age report.
  • [Aug 29] Swedish school fined over use of facial recognition, Lexology report.

Big Tech

  • [Aug 26] India is important, that’s why bringing hardware devices here: Google, ET Telecom report.
  • [Aug 26] Facebook wins appeal against German Data-Collection ban, The Wall Street Journal report.
  • [Aug 26] Instagram’s latest assault on Snapchat is a messaging app called Threads, The Verge report.
  • [Aug 28] Google is moving Pixel production from China to an old Nokia factory in Vietnam, The Verge report.
  • [Aug 30] Google expands scope of its bug bounty programme, unveils data protection reward programme for developers, NDTV Gadgets 360 report.

Opinions and Analyses

  • [Aug 25] Jon Evans, Tech Crunch, Crypto means Cryptotheology.
  • [Aug 26] Guest Author, Medianama, Should Indian Copyright law prevent text and data mining?
  • [Aug 26] Vishal Chawla, Analytics India Magazine, Why IoT security standards are crucial in preventing hackers from stealing your data.
  • [Aug 26] The Hindu Editorial, On the wrong side: On PCI backing Kashmir restrictions.
  • [Aug 26] Robert S Taylor, Lawfare, How to measure Cybersecurity.
  • [Aug 26] Mike Giglio, Defense One, China’s Spies Are on the Offensive. Can the US Fend Them Off?
  • [Aug 27] Gurshabad Grover, The Hindu, A judicial overreach into matters of regulation.
  • [Aug 27] Maj Gen Harsha Kakkar, Bharat Shakti, Foreign Policy and National Security.
  • [Aug 27] A Vinod Kumar, Institute for Defence Studies and Analyses, ‘No First Use’ is Not Sacrosanct: Need a Theatre-Specific Posture for Flexible Options.
  • [Aug 27] Jack Cable, Harvard Business Review, Every computer science degree should require a course in cybersecurity.
  • [Aug 27] Rahul Singh, The Hindustan Times, Key decisions underline govt’s focus on building stronger military.
  • [Aug 27] The Economic Times Opinion, Aadhaar linkage with social media is troublesome.
  • [Aug 28] Vikram Koppikar, Money Control, Aadhaar and Social Media: It’s a delicate balance between security and privacy. 
  • [Aug 28] Abhijit Singh, The Hindu, The Chief of Defence Staff needs an enabling institutional infrastructure.
  • [Aug 28] Samantha Ravish, Defense One, The US must prepare for a Cyber ‘Day After’.
  • [Aug 28] Mike Masnick, Tech Dirt, Protocols, not platforms: A technological approach to free speech.
  • [Aug 29] Dhruva Jaishankar, The Hindustan Times, The saga of India’s indigenous defence production.
  • [Aug 29] The Print, Does War & Peace taunt show how poorly equipped Indian judges are to handle security cases?
  • [Aug 30] Rohan Seth, The Asian Age, Wider debate needed on major changes in data protection law.
  • [Aug 30] Amit Cowshish, Institute for Defence Studies and Analyses, CDS: A pragmatic blueprint required for implementation.
  • [Aug 30] Crystal Lee and Jonathan Zong, Slate, Consent is not an ethical rubber stamp.
  • [Aug 30] Gopal Krishna, Business Today, Why the promised right to privacy and data protection law hasn’t been enacted yet. 
  • [Aug 31] Bidanda Chengappa, Deccan Herald, Peacetime spying is legitimate.
  • [Aug 31] Sandipan Deb, Livemint, When social media monopolies prey on freedom of expression.

SC asks Centre how to regulate Sexually Exploitative Content on Social Media

Written by Siddharth Manohar

The Supreme Court on Friday rejected a petition to block the websites of dominant social media platforms on the ground that they were used to spread videos of gang rapes and to facilitate a market for child prostitution. The two-judge bench of Justices UU Lalit and Madan B Lokur reasoned that blocking these sites was not a feasible solution, as it would set a trend of blocking wide swathes of the internet to solve specific problems with how it is used.

The decision comes in light of a petition filed by Hyderabad-based NGO Prajwala, asking the Court to ban social media websites used to traffic children and to put in place a mechanism to monitor the content circulated through mobile applications such as WhatsApp. The same bench had in April recognised the importance of regulating objectionable sexual material being circulated through social media applications. This was based on suo motu cognizance of a letter addressed to the then Chief Justice of India HL Dattu, asking the Court to take action against those responsible for posting a video of an incident of gang rape on social media.

The Court has asked the Additional Solicitor General to look into why no action was taken against the social media platforms by the police dealing with the cases. The Centre had earlier communicated that it is difficult to monitor content circulated through mobile phones, and even more so to find the culprit who starts the process. Tracking the user becomes much easier, they said, when a computer is used to spread the objectionable content.

The Court did, however, refer to the Central Government the important question of whether these social media platforms can be prosecuted for their role in spreading offensive material such as video recordings of rape and child pornography. The Court added that it would wait for a response from the Central Government before deciding what action ought to be taken in the matter.

Earlier orders in the matters can be accessed here, and here.

The Anatomy of Internet Shutdowns – II (Gujarat & Constitutional Questions)

Written by Nakul Nayak

In the last post I discussed the panoply of laws surrounding internet shutdowns in India and concluded that though there might be indirect regulatory connections to justify such shutdowns, they appear to be flimsy at best. In this post, I shall discuss the current internet shutdown in Gujarat in particular and whether these executive actions would pass constitutional muster.

As news reports would tell you, the Patidar reservation agitation began sometime in July 2015. The community’s sole demand is to receive reservation benefits under India’s complex affirmative action formula. As momentum for the agitation picked up, a major demonstration dubbed the Kranti Rally was organised in Ahmedabad, Gujarat’s largest city, with over half a million reportedly attending. Hardik Patel, the face of the movement, was arrested for not obtaining permission to stay on the ground after the rally and was later released. This, coupled with instances of police violence, heightened tensions within the state, manifesting in violence and vandalism of public property. That very night, Hardik Patel sent a WhatsApp message to his followers, urging them to maintain calm while simultaneously calling for a bandh the next day. Soon after, WhatsApp, along with mobile internet in many parts of Gujarat, was shut down. Both have remained suspended ever since in major cities.

The current mobile internet disruptions are a blot on free speech in Gujarat. In the latter section of this post, I will explain how this disruption is constitutionally overbroad and may constitute illegitimate prior restraint. However, taking off from where I left the last post, my first quibble is with the non-conformity with procedural propriety (assuming there is any) in directing the present blockage. Sec. 69A of the IT Act, which appears to be textually closest to affording kill-switch powers, may only be initiated and implemented by the Central Government. On the other hand, sec. 5(2) of the Telegraph Act, which grants powers to detain or stop the transmission of messages, may be exercised by both the central and state governments. Contractually speaking, on a liberal view of the Licence Agreement between the telcos and the Central Government, one may similarly conclude that both the central and state governments can direct service disruptions.

However, when the Indian Express asked Dhananjay Dwivedi, Secretary of the Science and Technology Department, when data services would be restored, he is reported to have said that “the decision was not taken by the state government but by local administration and police”. If this is true, then the directions to the telcos would smack of procedural lapses. Local authorities (as opposed to the state government) have nowhere been recognised under either of the statutes (the IT Act or the Telegraph Act) or the Licence Agreement. They have assumed powers of ordering network disruptions without actually possessing them. In fairness, there are other reports that the District Collectors of Ahmedabad, Vadodara etc., along with the police, have taken this step. Technically, they would fall within the ambit of the ‘state government’. However, no notifications or circulars to this effect are accessible on their websites.

On substance, the current internet shutdown in Gujarat may fall foul of constitutional free speech standards on two counts:

a) overbreadth and

b) prior restraint.

The Gujarat authorities have directed telcos to disable access to 2G and 3G mobile data in the entire state. In addition, according to some reports, certain social media websites such as WhatsApp, Facebook, Twitter and Instagram have been specifically blocked. Thus, netizens may not even access these websites through broadband. While the wisdom of the move to disable social media is debatable, it is undeniable that the absolute constraint on data access is inimical to the central notion of Article 19(2). As the Supreme Court reiterated in Shreya Singhal, “restrictions on the freedom of speech must be couched in the narrowest possible terms”. Consider the celebrated SC decision in Kameshwar Prasad, where a rule, formulated in the interest of public order, forbidding participation in any demonstrations by public servants was challenged. After holding that demonstrations constitute expression, the Court went on to characterise the various kinds of demonstrations, finding that some may be peaceful, some passive and some capable of causing public disorder. Finally, the Court found the blanket ban unconstitutional, concluding:

The vice of the rule, in our opinion, consists in this that it lays a ban on every type of demonstration – be the same however innocent and however incapable of causing a breach of public tranquility and does not confine itself to those forms of demonstrations which might lead to that result.[1]

Now substitute "demonstrations" with "internet communications". Such communications quite clearly constitute a form of speech under Art. 19(1)(a). Not all kinds of internet communications incite offences or are capable of disturbing public tranquillity. By directing a blanket ban on all mobile internet communications, innocent or otherwise, the Gujarat authorities' action is quite clearly overbroad. Its effects are felt not just by the protesters and rioters it was meant for; it also hinders community life by hampering communication between ordinary citizens and debilitates economic life by paralysing the businesses that employ those very citizens.

Further, the shutdown has been grounded on the public order/incitement restrictions of Art. 19(2), i.e., to prevent public disorder and incitement to the commission of offences. However, any public order restriction on speech must be narrowly tailored and have a "proximate and direct nexus with the expression".[2] The Shreya Singhal judgment adopted the clear and present danger test for public order restrictions, first expounded by Justice Holmes in Schenck v. US.[3] It may be relevant to note here that the US itself has abandoned this test for the more speech-protective Brandenburg[4] test of imminent lawless action. The Indian SC recognised as much in Arup Bhuyan. Under this standard, three conditions must be proven for speech to constitute incitement:

Intent: That the speech must have the object of promoting violence

Imminence: That the speech must lead to imminent lawless action, and

Likelihood: That the speech was likely to create such lawless action.

The dichotomy in standard-setting aside, it is evident that not all communications over mobile internet have any connection with the Patidar reservation. In fact, most communication in any public order situation revolves around safety and emergency, with only small pockets of incitement. Consequently, the blanket ban on all mobile internet communication, and the blockage of social media in particular, would fail the test of free speech.

But even beyond constitutional considerations, one must take note of the utilitarian role social media websites play during emergencies. They may be used to mass-communicate first-hand information and alert authorities and first responders. In 2011, when an earthquake struck the US, more than 40,000 earthquake-related tweets were posted within a minute of the first shock, a response widely appreciated by emergency management professionals. Closer home, the Nepal earthquake highlighted the important role played by open data and social media in mitigating confusion and consternation in the aftermath of the disaster. Facebook's Safety Check was also instrumental in this regard.[5]

Now might be an appropriate time to turn to our next argument: prior restraint. Prior restraints may be defined as state restrictions on speech before its publication, and they are generally regarded as unconstitutional under Indian law. In two celebrated decisions, Brij Bhushan and Romesh Thappar, the Supreme Court found prior restraints on print media to be generally unconstitutional. In Auto Shankar's case, the Court went even further and held that material defamatory of state officials may never be subject to prior restraint; the officials' remedy would lie in post-publication proceedings. However, in KA Abbas, the Supreme Court took a step back and struck a different balance between cinema and free speech, deciding to treat "motion pictures" differently from other forms of art and expression. The Court's rationale arose

… from the instant appeal of the motion picture, its versatility, realism (often surrealism), and its coordination of the visual and aural senses. The art of the cameraman, with trick photography, vistavision and three dimensional representation thrown in, has made the cinema picture more true to life than even the theatre or indeed any other form of representative art. The motion picture is able to stir up emotions more deeply than any other product of art. … It is also for this reason that motion picture must be regarded differently from other forms of speech and expression. A person reading a book or other writing or hearing a speech or viewing a painting or sculpture is not so deeply stirred as by seeing a motion picture.

The Gujarat internet disruption portends important questions of prior restraint in a hitherto jurisprudentially unconventional medium: the internet. By disallowing access to mobile internet and blocking certain social media websites in particular, the authorities have no doubt imposed a wide array of prior restraints, foreclosing communication, business transactions, file sharing, etc. Would mobile internet communication fall within the Brij Bhushan-Romesh Thappar print media standard of prior restraint, which subjects speech restrictions to extremely strict scrutiny? Or would file sharing, being a common feature of apps like WhatsApp, Instagram, Facebook, and YouTube, attract the KA Abbas motion picture standard of prior restraint? Or would mobile internet communications warrant an altogether different prior restraint standard?

There appears to be no clear-cut answer yet. [Edit: Moreover, Nariman J., in Shreya Singhal, accepted an intelligible differentia argument under Art. 14,[6] but limited it to the creation of technology-specific offences.][7] The Court very clearly held that the threshold for curbing content on the internet cannot be altogether new. Yet the reasonableness of prior restraint standards over the internet still seems to be in limbo. To take an example, apps like WhatsApp, which permit file (photo/video) transfers, are in a sense *more* audiovisual than ordinary internet chatting apps that do not. Would the latter merit the Brij Bhushan standard or the KA Abbas standard?

Concluding our two-part discussion on internet shutdowns, I observe the following:

  1. Internet shutdowns in India fall into a sort of regulatory no-man's land. Accordingly, central, state, and local authorities are able to order them without thinking twice about accountability.
  2. The current internet disruptions in Gujarat do not distinguish between innocuous and incendiary speech, restricting both in a blanket ban reminiscent of the policies of dictatorial regimes such as Syria and Egypt. Not being narrowly tailored, the shutdown is fatally overbroad.
  3. The prior restraint standards applicable to internet communications must be urgently addressed by the courts or Parliament. Sandwiched between conventional print media and motion pictures, internet texting apps suffer from shades of grey that desperately need colouring.

Nakul Nayak was a Fellow at the Centre for Communication Governance from 2015-16.

[1] Kameshwar Prasad, p. 384.

[2] S. Rangarajan v. P. Jagjivan & Ors., (1989) 2 SCC 574, para. 45.

[3] Schenck v. US, 63 L. Ed. 470, at 473-74.

[4] Brandenburg v. Ohio, 23 L. Ed. 2d 430 (1969).

[5] My thanks to Joshita Pai for pointing this out to me.

[6] Shreya Singhal, paras. 97-98.

[7] I should thank Gautam Bhatia for correcting me on this.
