Comments on the draft amendments to the IT Rules (Jan 2023)

The Ministry of Electronics and Information Technology (“MeitY”) proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Guidelines”) on January 17, 2023. The draft amendments aim to regulate online gaming, but also seek to have intermediaries “make reasonable efforts” to cause their users not to upload or share content identified as “fake” or “false” by the Press Information Bureau (“PIB”), any Union Government department, or an authorised agency (see proposed amendment to Rule 3(1)(b)(v)). The draft amendments in their current form raise certain concerns that we believe merit additional scrutiny.

CCG submitted comments on the proposed amendment to Rule 3(1)(b)(v), highlighting its key feedback and concerns. The comments were authored by Archit Lohani and Vasudev Devadasan and reviewed by Sachin Dhawan and Jhalak M. Kakkar. Some of the key issues raised in our comments are summarised below.

  1. Misinformation, “fake”, and “false” content includes both unlawful and lawful expression

The proposed amendment does not define the term “misinformation” or provide any guidance on how determinations that content is “fake” or “false” are to be arrived at. Misinformation can include various forms of content, and experts have identified up to seven subtypes of misinformation: imposter content; fabricated content; false connection; false context; manipulated content; misleading content; and satire or parody. Different subtypes of misinformation can cause different types of harm (or no harm at all) and are treated differently under the law. Misinformation or false information thus includes both lawful and unlawful speech (e.g., satire is constitutionally protected speech).

Within the broad ambit of misinformation, the draft amendment does not provide sufficient guidance to the PIB and government departments on what sort of expression is permissible and what should be restricted. The draft amendment effectively provides them with unfettered discretion to restrict both unlawful and lawful speech. When seeking to regulate misinformation, experts, platforms, and other countries have drawn up detailed definitions that take into consideration factors such as intention, form of sharing, virality, context, impact, public interest value, and public participation value. These definitions recognize the potential multiplicity of context, content, and propagation techniques. In the absence of clarity over what types of content may be restricted based on a clear definition of misinformation, the draft amendment will restrict both unlawful speech and constitutionally protected speech. It will thus constitute an overbroad restriction on free speech.

  2. Restricting information solely on the ground that it is “false” is constitutionally impermissible

Article 19(2) of the Indian Constitution allows the government to place reasonable restrictions on free speech in the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States, public order, decency or morality, contempt of court, defamation, or incitement to an offence. The Supreme Court has ruled that these grounds are exhaustive and speech cannot be restricted for reasons beyond Article 19(2), including where the government seeks to block content online. Crucially, Article 19(2) does not permit the State to restrict speech on the ground that it is false. If the government were to restrict “false information that may imminently cause violence”, such a restriction would be permissible because it relates to the ground of “public order” in Article 19(2). However, if enacted, the draft amendment would restrict online speech solely on the ground that it is declared “false” or “fake” by the Union Government. This amounts to a State restriction on speech for reasons beyond those outlined in Article 19(2), and would thus be unconstitutional. Restrictions on free speech must have a direct connection to the grounds outlined in Article 19(2) and must be a necessary and proportionate restriction on citizens’ rights.

  3. Amendment does not adhere to the procedures set out in Section 69A of the IT Act

The Supreme Court upheld Section 69A of the IT Act in Shreya Singhal v Union of India inter alia because it permitted government blocking of online content only on grounds consistent with Article 19(2) and provided important procedural safeguards, including notice, a hearing, and a written blocking order that can be challenged in court. The constitutionality of the government’s blocking power is thus contingent on the substantive and procedural safeguards provided by Section 69A and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. The proposed amendment to the Intermediary Guidelines would permit the Union Government to restrict online speech in a manner that does not adhere to these safeguards. It would permit the blocking of content on grounds beyond those specified in Article 19(2), based on a unilateral determination by the Union Government, without a specific procedure for notice, hearing, or a written order.

  4. Alternate methods to counter the spread of misinformation

Any response to misinformation on social media platforms should be based on empirical evidence on the prevalence and harms of misinformation on social media. Thus, as a first step, social media companies should be required to provide greater transparency and facilitate researcher access to data. There are alternative methods to regulate the spread of misinformation that may be more effective and preserve free expression, such as labelling or flagging misinformation. We note that there does not yet exist widespread legal and industry consensus on standards for independent fact-checking, but organisations such as the ‘International Fact-Checking Network’ (IFCN) have laid down certain principles that independent fact-checking organisations should comply with. Having platforms label content pursuant to IFCN fact checks, and even notify users when the content they have interacted with has subsequently been flagged by an IFCN fact checker would provide users with valuable informational context without requiring content removal.

Report on Intermediary Liability in India

The question of when intermediaries are liable, or conversely not liable, for content they host or transmit is often at the heart of regulating content on the internet. This is especially true in India, where the Government has relied almost exclusively on intermediary liability to regulate online content. With the advent of the Intermediary Guidelines 2021, and their subsequent amendment in October 2022, there has been a paradigm shift in the regulation of online intermediaries in India. 

To help understand this new regulatory reality, the Centre for Communication Governance (CCG) is releasing its ‘Report on Intermediary Liability in India’ (December 2022).

This report aims to provide a comprehensive overview of the regulation of online intermediaries and their obligations with respect to unlawful content. It updates and expands on the Centre for Communication Governance’s 2015 report documenting the liability of online intermediaries, and now covers the decisions in Shreya Singhal v Union of India and Myspace v Super Cassettes Industries Ltd, the Intermediary Guidelines 2021 (including the October 2022 Amendment), the E-Commerce Rules, and the IT Blocking Rules. It captures over two decades of regulatory and judicial practice on the issue of intermediary liability since the adoption of the IT Act. The report aims to provide practitioners, lawmakers, regulators, judges, and academics with valuable insights as they embark on shaping the coming decades of intermediary liability in India.

Some key insights that emerge from the report are summarised below:

Limitations of Section 79 (‘Safe Harbour’) Approach: In the cases analysed in this report, there is little judicial consistency in the application of secondary liability principles to intermediaries, including the obligations set out in the Intermediary Guidelines 2021, and monetary damages for transmitting or hosting unlawful content are almost never imposed on intermediaries. This suggests that there are significant limitations to the regulatory impact of obligations imposed on intermediaries as pre-conditions to safe harbour.

Need for clarity on content moderation and curation: The text of Section 79(2) of the IT Act grants intermediaries safe harbour provided they act as mere conduits, not interfering with the transmission of content. There exists ambiguity over whether content moderation and curation activities would cause intermediaries to violate Section 79(2) and lose safe harbour. The Intermediary Guidelines 2021 have partially remedied this ambiguity by expressly stating that voluntary content moderation will not result in an intermediary ‘interfering’ with the transmission under Section 79(2). However, ultimately amendments to the IT Act are required to provide regulatory certainty.

Intermediary status and immunity on a case-by-case basis: An entity’s classification as an intermediary is not a status that applies across all its operations (in the way that ‘company’ or ‘partnership’ status does); it depends on the function the entity performs vis-à-vis the specific electronic content in connection with which it is sued. Courts should determine whether an entity is an ‘intermediary’ and whether it complied with the conditions of Section 79 in relation to the content it is being sued for. Consistently making this determination at a preliminary stage of litigation would greatly further the efficacy of Section 79’s safe harbour approach.

Concerns over GACs: While the October 2022 Amendment stipulates that two members of every Grievance Appellate Committee (“GAC”) shall be independent, no detail is provided as to how such independence shall be secured (e.g., security of tenure and salary, an oath of office, minimum judicial qualifications, etc.). Such independence is vital as GAC members are appointed by the Union Government, but the Union Government or its functionaries or instrumentalities may also be parties before a GAC. Further, given that the GACs are authorities ‘under the control of the Government of India’, they have an obligation to abide by the principles of natural justice and due process, and to comply with the Fundamental Rights set out in the Constitution. If a GAC directs the removal of content beyond the scope of Article 19(2) of the Constitution, questions of an impermissible restriction on free expression may be raised.

Actual knowledge in 2022: The October 2022 Amendment requires intermediaries to make reasonable efforts to “cause” their users not to upload certain categories of content and to ‘act on’ user complaints against content within seventy-two hours. Requiring intermediaries to remove content at the risk of losing safe harbour in circumstances other than the receipt of a court or government order prima facie violates the decision in Shreya Singhal. Further, India’s approach to notice and takedown continues to lack a system for the reinstatement of content.

Uncertainty over government blocking power: Section 69A of the IT Act expressly grants the Union Government the power to block content, subject to a hearing for the originator (uploader) or intermediary. However, Section 79(3)(b) of the IT Act may also be utilised to require intermediaries to take down content absent some of the safeguards provided in Section 69A. The fact that the Government has relied on both provisions in the past, and that it does not voluntarily disclose blocking orders, makes a robust legal analysis of the blocking power challenging.

Hearing originators when blocking: The decision in Shreya Singhal and the requirements of due process support the understanding that the originator must be notified and granted a hearing under the IT Blocking Rules prior to their content being restricted under Section 69A. However, evidence suggests that the government regularly does not provide originators with hearings, even where the originator is known to the government. Instead, the government directly communicates with intermediaries away from the public eye, raising rule of law concerns.

Issues with first originators: Both the methods proposed for ‘tracing first originators’ (hashing unique messages and affixing encrypted originator information) are easily circumvented, require significant technical changes to the architecture of messaging services, offer limited investigatory or evidentiary value, and will likely undermine the privacy and security of all users to catch a few bad actors. Given these considerations, it is unlikely that such a measure would satisfy the proportionality test laid out by current Supreme Court doctrine.

Broad and inconsistent injunctions: An analysis of injunctions against online content reveals that the contents of court orders are often sweeping, imposing vague compliance burdens on intermediaries. When issuing injunctions against online content, courts should limit blocking or removals to specific URLs. Further, courts should be cognisant of the fact that intermediaries have themselves not committed any wrongdoing, and the effect of an injunction should be seen as meaningfully dissuading users from accessing content rather than as an absolute prohibition.

This report was made possible by the generous support we received from National Law University Delhi. CCG would like to thank our Faculty Advisor Dr. Daniel Mathew for his continuous direction and mentorship. This report would not have been possible without the support provided by the Friedrich Naumann Foundation for Freedom, South Asia. We are grateful for comments received from the Data Governance Network and its reviewers. CCG would also like to thank Faiza Rahman and Shashank Mohan for their review and comments, and Jhalak M. Kakkar and Smitha Krishna Prasad for facilitating the report. We thank Oshika Nayak of National Law University Delhi for providing invaluable research assistance for this report. Lastly, we would also like to thank all members of CCG for the many ways in which they supported the report, in particular the ever-present and ever-patient Suman Negi and Preeti Bhandari for their unending support for all the work we do.

CCG’s Comments to the Ministry of Electronics & Information Technology on the proposed amendments to the Intermediary Guidelines 2021

On 6 June 2022, the Ministry of Electronics and Information Technology (“MeitY”) released proposed amendments to Part I and Part II of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”). CCG submitted its comments on the proposed amendments, highlighting its key feedback and concerns. The comments were authored by Vasudev Devadasan and Bilal Mohamed and reviewed and edited by Jhalak M Kakkar and Shashank Mohan.

The 2021 IT Rules were released in February 2021, and Parts I and II set out the conditions intermediaries must satisfy to avail of legal immunity for hosting unlawful content (or ‘safe harbour’) under Section 79 of the Information Technology Act, 2000 (“IT Act”). The 2021 IT Rules have been challenged in several High Courts across the country, and the Supreme Court is currently hearing a transfer petition on whether these actions should be clubbed and heard collectively by the apex court. In the meantime, MeitY has released the proposed amendments to the 2021 IT Rules, which seek to make incremental but significant changes to the Rules.

CCG’s comments to the MeitY can be summarised as follows:

Dilution of safe harbour in contravention of Section 79(1) of the IT Act

The core intention behind providing intermediaries with safe harbour under Section 79(1) of the IT Act is to ensure that intermediaries do not restrict the free flow of information online for fear of being held liable for third-party content uploaded by users. The proposed amendments to Rules 3(1)(a) and 3(1)(b) of the 2021 IT Rules potentially impose an obligation on intermediaries to “cause” and “ensure” that their users do not upload unlawful content. These amendments may require intermediaries to make complex determinations on the legality of speech and cause them to remove content that carries even the slightest risk of liability. This may result in the restriction of online speech and the corporate surveillance of Indian internet users by intermediaries. If, on the other hand, the proposed amendments are interpreted as not requiring intermediaries to actively prevent users from uploading unlawful content, they may be functionally redundant, and we suggest they be dropped to avoid legal uncertainty.

Concerns with Grievance Appellate Committee

The proposed amendments envisage one or more Grievance Appellate Committees (“GAC”) that sit in appeal over intermediary determinations with respect to content. Users may appeal to a GAC against the decision of an intermediary not to remove content despite a user complaint, or alternatively, request a GAC to reinstate content that an intermediary has voluntarily removed or to lift account restrictions that an intermediary has imposed. The creation of GAC(s) may exceed the Government’s rulemaking powers under the IT Act. Further, the GAC(s) lack the necessary safeguards in their composition and operation to ensure the independence required by law of such an adjudicatory body. Such independence and impartiality may be essential as the Union Government is responsible for appointing individuals to the GAC(s), but the Union Government or its functionaries or instrumentalities may also be a party before the GAC(s). Further, we note that the originator, the legality of whose content is in dispute before a GAC, has not expressly been granted a right to a hearing before the GAC. Finally, we note that the GAC(s) may lack the capacity to deal with the high volume of appeals against content and account restrictions. This may lead to situations where, in practice, only a small number of internet users are afforded redress by the GAC(s), leading to inequitable outcomes and discrimination amongst users.

Concerns with grievance redressal timeline

Under the proposed amendment to Rule 3(2), intermediaries must acknowledge a complaint by an internet user for the removal of content within 24 hours, and ‘act and redress’ the complaint within 72 hours. CCG’s comments note that the 72-hour timeline to address complaints proposed by the amendment to Rule 3(2) may cause online intermediaries to over-comply with content removal requests, leading to the possible take-down of legally protected speech at the behest of frivolous user complaints. Empirical studies of Indian intermediaries have demonstrated that smaller intermediaries lack the capacity and resources to make complex legal determinations of whether the content complained against violates the standards set out in Rule 3(1)(b)(i)-(x), while larger intermediaries are unable to address the high volume of complaints within short timelines, leading to the mechanical takedown of content. We suggest that any requirement that online intermediaries address user complaints within short timelines could differentiate between content that is ex facie (on the face of it) illegal and causes severe harm (e.g., child sexual abuse material or gratuitous violence), and content where determinations of legality may require legal or judicial expertise, such as copyright or defamation.

Need for specificity in defining due diligence obligations

Rule 3(1)(m) of the proposed amendments requires intermediaries to ensure a “reasonable expectation of due diligence, privacy and transparency” to avail of safe harbour, while Rule 3(1)(n) requires intermediaries to “respect the rights accorded to the citizens under the Constitution of India.” These rules do not impose clearly ascertainable legal obligations, which may increase compliance burdens, hamper enforcement, and result in inconsistent outcomes. In the absence of specific data protection legislation, the obligation to ensure a “reasonable expectation of due diligence, privacy and transparency” is unclear. Fundamental rights obligations were drafted and developed in the context of citizen-State relations and may not be suitable for, or aptly transposed to, the relations between intermediaries and users. Further, the content of ‘respecting Fundamental Rights’ under the Constitution is itself contested and open to reasonable disagreement between various State and constitutional functionaries. Requiring intermediaries to uphold such obligations will likely lead to inconsistent outcomes based on varied interpretations.

[July 15-22] CCG’s Week in Review: Curated News in Information Law and Policy

Parliament amended the National Investigation Agency Act this week, expanding the agency’s investigation powers to cover cyber-terrorism; FaceApp raised user data privacy concerns; and a leaked bill proposed banning cryptocurrencies. Presenting this week’s most important developments in law and tech.

Aadhaar

  • [July 15] Govt plans Aadhaar based identification of patients to maintain health records, Live Mint report; The Indian Express report.
  • [July 15] Petition in Delhi HC seeking linking of Aadhaar with property documents, Live Mint report.
  • [July 15] Government stops verification process using Aadhaar for driving license: Nitin Gadkari, The Economic Times report; ET Auto report.
  • [July 18] Will Aadhaar interchangeability for ITR make PAN redundant? Live Mint report.
  • [July 18] Govt floats idea for Aadhaar-like database for mapping citizen health, Business Standard report; Money Control report; Inc42 report.
  • [July 19] Linking Aadhaar with Voter ID— Election Commission to decide within weeks, The Print report; India Legal analysis.
  • [July 21] Mumbai man fights against linking Aadhaar to salary account, The Quint report.
  • [July 21] Violating SC rules, matrimonial site sells love, marriage using Aadhaar data, National Herald report.
  • [July 22] Large cash deposits may soon need Aadhaar authentication, Times of India report; Money Control report.

Right to Information

  • [July 19] Bill to amend RTI law introduced in Lok Sabha amid opposition, India Today report.

Free Speech

  • [July 18] Ajaz Khan of Bigg Boss fame arrested by Mumbai Police for TikTok video, The Asian Age report; DNA India report.
  • [July 19] Guwahati HC grants anticipatory bail to poets accused of writing communally charged poetry on Assam citizenship crisis, Live Law report.

Internet Governance

  • [July 16] MeitY to finalise Intermediary Liability rules amendment by month end, Medianama report; Inc42 report.

Data Protection and Data Privacy

  • [July 17] Canada probing data theft at military research center: reports, Business recorder report.
  • [July 17] BJP raises issue of privacy breach by tech devices in Rajya Sabha, BJD demands more funds, News 18 report.
  • [July 17] TMC MPs protest outside Parliament in Delhi, demand to bring Data Protection Law, DNA India report.
  • [July 17] Democrats issue warnings against viral Russia-based face-morphing app ‘FaceApp’, NPR report.
  • [July 18] Government notice to TikTok, Helo; asks to answer 21 questions or face ban, Gadgets Now report; Medianama report; Business Insider report.
  • [July 18] Singapore data protection enforcement guide released, Asia Business Law Journal report.
  • [July 18] Irish Data Protection Commission issues advice over FaceApp privacy concerns, RTE report.
  • [July 18] Govt admits to data leak of unemployment figures ahead of May announcement in Rajya Sabha, terms the issue ‘serious’, Firstpost report.
  • [July 19] From bad to worse: PM Modi’s office has asked IT Ministry to keep a close eye on TikTok, India Times report.
  • [July 20] Equifax near $700 million settlement of data breach probes: WSJ, Al Jazeera report.
  • [July 21] Jio backs data protection; highlights future growth areas like agriculture, healthcare and education, The Economic Times report.

Data Localisation

  • [July 19] Firms exploring Telangana to set up data centres, The Hindu report.
  • [July 22] Bytedance starts building local data centre in India after lawmakers complain of data privacy, Entrackr report.
  • [July 22] China’s ByteDance to store Indian data locally after MPs raise concerns on privacy, national security, ET Tech report; Outlook report.
  • [July 22] Jio backs data localization to stave off cyberattacks, ET Tech report; Medianama report.

Digital India

  • [July 15] India lags peers in tech skills: Coursera study, ET Telecom report.
  • [July 16] WiFi on the go: Government pushes to keep Bharat connected, ET Telecom report.
  • [July 17] BMTC wants to reboot its IT plan, ET tech report.
  • [July 19] How improved infrastructure and tech firms are changing game development in India, ET Tech report.

Digital Payments and E-Commerce

  • [July 14] How women are sidelined in India’s e-commerce growth, ET Tech report.  
  • [July 17] Digital payment firms write to Government, asking compensation for losses incurred due to ‘zero’ merchant fee, Latestly report.
  • [July 22] How an in-house e-commerce platform Leaf Era has revolutionised government procurement, ET Tech report.
  • [July 22] Aditya Birla Payments Bank to shut down due to “unanticipated developments in business landscape”, Medianama report.

Cryptocurrency

  • [July 15] Hacked crypto exchange Bitpoint discovers more millions are missing, Coin Desk report.
  • [July 15] India: Leaked draft bill would ban all crypto except ‘Digital Rupee’, Coin Telegraph report.
  • [July 16] US says cryptocurrency is a national security issue, The New Indian Express report.
  • [July 16] Bitcoin and crypto suddenly branded a national security issue, Forbes report.
  • [July 16] Crypto a security threat, instrument for illicit activities: Trump admin, Business Standard report.
  • [July 17] Facebook said its Libra cryptocurrency will be regulated by Swiss authorities – but that was news to those Swiss authorities, Business Insider report.
  • [July 17] Making sense of chaos? Algos scour social media for clues to crypto moves, ET Markets report.
  • [July 20] Cryptokart: Another Indian crypto exchange shuts down operations, Coin Telegraph report.
  • [July 22] Crypto-attacks are rising in Asia—and cybersecurity AI may be the best way to fight the threat: Darktrace, Business Insider report.

Emerging Tech

  • [July 13] Facial recognition tech is growing stronger, thanks to your face, New York Times report.
  • [July 19] Is there a tug of war between Niti Aayog, IT Ministry on artificial intelligence project? India Today report.

Big Tech

  • [July 15] Tech giants to face US hearings on anti-trust, cryptocurrency, ET Telecom report.
  • [July 15] Amazon Web Services still on pole for $10bn defence cloud deal after Oracle case crashes, DataEconomy.com report.
  • [July 16] Google accused of ripping off digital ad technology in US lawsuit, ET Telecom report.
  • [July 19] EU opens investigation into anti-competitive conduct of Amazon: Will it face heat in India too? Entrackr report.

Telecom/5G

  • [July 17] Govt working on revival of BSNL: Minister tells Lok Sabha, The Hindu Business Line report.
  • [July 19] Make in India: Only half of country’s 268 cellphone makers stay afloat, Financial Express report.

More on Huawei

  • [July 16] The US Congress wants to block the Trump administration from weakening Huawei restrictions, The Verge report.
  • [July 17] US-China talks stuck in rut over Huawei, The Wall Street Journal report.
  • [July 19] Two-thirds of Canadians reject closer ties to China and want Huawei banned from 5G networks, poll says, South China Morning Post report.
  • [July 20] White House to host meeting with tech executives on Huawei ban: report, Business Standard report.  

Cybersecurity

  • [July 15] Use Indian IPRs to ensure telecom network security: Trade group, ET Telecom report.
  • [July 15] Indian IT managers facing budget crunch for cybersecurity, Live Mint report.
  • [July 16] Your WhatsApp, Telegram files can be hacked: Symantec, ET Telecom report.
  • [July 16] IT companies tightening salary budgets, leveraging variable pay for niche skills, ET Tech report.
  • [July 17] Druva acquires hybrid data protection firm CloudLanes, The Economic Times report.
  • [July 17] Indian Army launches massive crackdown on personnel violating its cybersecurity norms, The Print report.
  • [July 19] NSO spyware targets phones to get data from Google, Facebook, iCloud: Report, Medianama report.
  • [July 20] New bills on cybersecurity, crime against women soon: Union Minister, India Today report; The Indian Express report.
  • [July 21] An entire nation just got hacked, CNN report.
  • [July 22] Fix Rogue audits; guard Indian data; bulletproof 5G: India’s new cybersecurity chief’s Vision 2020, ET Prime report.
  • [July 22] Fake FaceApp software may infect your device, says global cybersecurity company Kaspersky Lab, New Nation report.

Tech and Elections

  • [July 14] New election systems use vulnerable software, AP News report.

Tech and Law Enforcement

  • [July 12] Revealed: This is Palantir’s Top-Secret User Manual for Cops, Vice Motherboard report.
  • [July 22] WhatsApp traceability case: Details of data requests made by Tamil Nadu Govt to social media companies, Medianama report.

Tech and Military

  • [July 14] French jetpack man flyboards up Champs-Elysees for Paris Parade, RFI report.
  • [July 15] Dassault offset money to help in skill training: FM Nirmala Sitharaman, Money Control report; The Economic Times report.
  • [July 16] Modi Govt to buy Pilatus trainer aircraft following corruption charges, to ban Swiss defence firm for one year, OpIndia report.
  • [July 16] If India chooses F-21, it will plug into ‘world’s largest fighter plane ecosystem’: Lockheed Martin, The Economic Times report.
  • [July 17] AI has a bias problem and that can be a big challenge in cybersecurity, CNBC report.
  • [July 17] IAF on spares buying spree, The Quint report.
  • [July 18] Navy to buy Rs. 1,589 crore satellite from ISRO, The Economic Times report.
  • [July 18] Indian MoD issues RFP for heavyweight torpedoes for Kalvari-class submarines, Jane’s 360 report.
  • [July 18] Rafale will provide IAF strategic deterrence: Defence Ministry, Money Control report.
  • [July 19] Lockheed Martin identifies 200 potential Indian partners, Hindustan Times report.
  • [July 19] US F-35, poster child for ineptitude, inefficiency, The Middle East Monitor report.
  • [July 19] South African Council to collaborate with Indian defence industry, Outlook India report.
  • [July 20] DRDO carries out a dozen successful summer trials of NAG anti-tank missile, ANI report.
  • [July 21] IAF pilots could soon fly Tom Cruise’s fighter jet from Top Gun: Maverick, News 18 report.
  • [July 21] India to forge ahead with Russia accord despite US threat of sanctions, DNA India report.

National Security Legislation

  • [July 15] Lok Sabha passes bill that gives more powers to NIA, Live Mint report; ANI report.
  • [July 15] Lok Sabha passes NIA Amendment Bill to give more power to anti-terror agency; here’s all you need to know, Business Insider report.
  • [July 17] What is the National Investigation Agency Bill and why is it in contention?, Money Control report.
  • [July 17] Rajya Sabha passes National Investigation Agency Amendment Bill 2019, Live Mint report; Outlook India report.
  • [July 18] Cabinet asks finance panel to consider securing non-lapsable funds for defence, The Indian Express report; Financial Express report.
  • [July 20] New bills on cybersecurity, crime against women soon: Union Minister, India Today report; The Indian Express report.

Opinions and Analyses

  • [July 11] Ryan Gallagher, The Intercept, How US Tech giants are helping build China’s Surveillance state.
  • [July 15] Jemima Kelly, Financial Times, Trump v Crypto: rage against the obscene.
  • [July 15] Ravi Shanker Kapoor, News 18 Opinion, Cost of not carrying out economic reforms: Acute shortage of funds for military modernisation.
  • [July 16] Jayshree Pandya, Forbes, Nuances of Aadhaar: India’s digital identity, identification system and ID.
  • [July 16] Binoy Kampmark, International Policy Digest, The UN’s free speech problem.
  • [July 16] K Satish Kumar, DNA India, Need more clarity on data bill.
  • [July 16] Abhishek Banerjee, Swarajya, Richa Bharti: The Free Speech Hero India Needs.
  • [July 17] Ananth Krishnan, The Print, Three reasons why it’s not Huawei or the highway for India’s 5G future.
  • [July 17] Rajesh Vellakat, Financial Express, Personal Data Protection Bill: Will it disrupt our data ecosystem?
  • [July 17] Nouriel Roubini, Live Mint Opinion, Seychelles-based BitMEX and the great crypto heist.
  • [July 17] Tim O’Reilly, Quartz, Antitrust regulators are using the wrong tools to break up Big Tech.
  • [July 18] Tiana Zhang, Jodi Wu, Yue Qiu and Richard Sharpe, Mondaq, Newly released draft measures on data security management strengthen China’s data protection framework.
  • [July 18] Gwyn D’Mello, India Times, If you worry about FaceApp and not your Facebook and Aadhaar, you have bigger problems.
  • [July 18] Sue Halpern, The New Yorker, How Cyber Weapons are changing the landscape of modern warfare.
  • [July 19] TV Mohandas Pai and Umakant Soni, Financial Express, An AI innovation engine for New India.
  • [July 20] Amit Cowshish, The Tribune, Indo-US defence trade not free from encumbrances.
  • [July 20] Umberto Sulpasso, Eurasia Review, Domestic Knowledge Product: Enhancing Wealth, Welfare and National Security—Analysis.
  • [July 20] Tiffany C. Li, The Atlantic, FaceApp makes today’s privacy laws look antiquated.
  • [July 20] Tom Robinson, Venture Beat, Crypto can prevent money laundering better than traditional finance.
  • [July 21] Vimal Kumar Kashyap, The Pioneer, 5G to usher in fourth industrial revolution.
  • [July 21] Michael Ashley, Forbes, It’s time to fight back for data sovereignty.
  • [July 22] Vidushi Marda, The Hindu, Facial recognition is an invasive and inefficient tool.

The Supreme Court’s Free Speech To-Do List

Written by members of the Civil Liberties team at CCG

The Supreme Court of India is often tasked with adjudicating disputes that shape the course of free speech in India. Here’s a roundup of some key cases currently before the Supreme Court.

Kamlesh Vaswani vs. Union of India

A PIL petition was filed in 2013 seeking a ban on pornography in India. The petition also prayed for a direction to the Union Government to “treat watching of porn videos and sharing as non-bailable and cognizable offence.”

During the course of the proceedings, the Department of Telecommunications (DoT) ordered ISPs to block over 800 websites allegedly hosting pornographic content, despite the freedom of expression and privacy-related concerns raised before the Supreme Court. The Government argued that the list of websites had been submitted to the DoT by the petitioners, and that the DoT had blocked the websites without any verification. The ban was revoked after much criticism.

The case, currently pending before the Supreme Court, also has implications for the intermediary liability regime in India. Internet Service Providers may claim safe harbour from liability for content they host, as long as they satisfy certain due diligence requirements under Sec. 79 of the IT Act, read with the Information Technology (Intermediaries Guidelines) Rules, 2011. After the Supreme Court read down these provisions in Shreya Singhal v. Union of India, the primary obligation is to comply with Court orders seeking takedown of content. The petition before the Supreme Court seeks to impose an additional obligation on ISPs to identify and block all pornographic content, or risk being held liable. Our work on this case can be found here.

Sabu Mathew George vs. Union of India

This is a 2008 case in which a writ petition was filed to ban ‘advertisements’ relating to pre-natal sex determination from search engines in India. Several orders have been passed, and the state has now created a nodal agency that provides search engines with details of websites to block. The ‘doctrine of auto-block’ is an important consideration in this case – in one of the orders, the Court listed roughly 40 search terms and stated that the respondents should ensure that any attempt to look up these terms would be ‘auto-blocked’, which raises concerns about intermediary liability and free speech.

Currently, a note has been filed by the petitioner’s advocate, which states that search engines have the capacity to take down such content, yet even upon intimation only end up taking down certain links and not others. Our work on this case can be found on the following links – 1, 2, 3.

Prajwala vs. Union of India

This is a 2015 case, where an NGO (named Prajwala) sent the Supreme Court a letter raising concerns about videos of sexual violence being distributed on the internet. The letter sought to bring attention to the existence of such videos, as well as their rampant circulation on online platforms.

Based on the contents of the letter, a suo motu petition was registered. Google, Facebook, WhatsApp, Yahoo and Microsoft were also impleaded as parties. A committee was constituted to “assist and advise this Court on the feasibility of ensuring that videos depicting rape, gang rape and child pornography are not available for circulation”. The relevant order, which discusses the committee’s recommendations, can be found here. One of the stated objectives of the committee was to examine technological solutions to the problem – for instance, auto-blocking. This raises issues related to intermediary liability and free speech.

Information Gatekeepers and Article 19(1)(a) of the Indian Constitution

I have put a draft of my paper titled ‘Gatekeeper Liability and Article 19(1)(a) of the Indian Constitution’ on SSRN. You can read it here. It will eventually be published in the NUJS Law Review.

Alternatively, this essay (written for a UPenn/CIS/ORF publication) based on the paper sets out my argument briefly.

Introduction

The press was once the most important medium of mass communication. Indira Gandhi understood this well and used the gatekeeping function of large media houses to prevent citizens from accessing critical information. The press’s function as an information gatekeeper is protected by jurisprudence, but this protection is articulated as ‘freedom of the press’, making it a medium-specific protection. As the Internet increasingly replaces the press as the most important source of information for citizens, these structural protections need to extend online. The online intermediary may be the new avatar of the information gatekeeper: a third party that performs an essential function in transmitting information from speakers to audiences, and a potential choke point that the state can use to cut off flows of information.

Aside from the press freedom norms, much of our freedom of expression jurisprudence deals with the state’s relationship with the speaker. The contours of our freedom of expression rights have formed in this context. It is relatively easy for the judiciary to grasp how statutory provisions like section 66A of the Information Technology Act impact freedom of expression. Here the law targets the speaker directly and any unjust application or chilling effect is more visible. It is also more likely to be resisted by the target of regulation, since the speaker is always interested in her own right to speak.

Indirect regulation of speech is quite different. The law is aimed at information gatekeepers, who may choose not to publicise censorship and who may not be as interested in protecting the speech as the original speaker is. Scholars have described these gatekeepers as the ‘weakest link’, through which speech is most vulnerable to state excesses.

Information gatekeepers and Indian law

It is common enough for states to use ‘middle-men’ to enforce change in behaviour when it is difficult to control the primary offender’s conduct directly. For example, since it is difficult to directly compel minors to avoid drinking, the law targets alcohol-sellers, leveraging their gatekeeping function to cut off the supply of alcohol to minors.

Information gatekeepers were used to regulate the flow of information even in the pre-digital world. Publishers and booksellers were held liable for circulating banned publications in many countries, including India. India has a particularly pernicious rule criminalizing the circulation of obscene content. It comes from the Supreme Court’s judgment in Ranjit Udeshi v. State of Maharashtra, which is well known for its interpretation of obscenity law in the context of D.H. Lawrence’s ‘Lady Chatterley’s Lover’. The other critical element of this judgment received almost no attention – the liability of a bookseller for the circulation of obscene content.

D.H. Lawrence was never prosecuted in India for his book. The ‘Lady Chatterley’s Lover’ case in the Supreme Court was about the liability of the owners of Happy Book Stall, a bookshop at which ‘Lady Chatterley’s Lover’ was sold. The Supreme Court held that the booksellers were liable for circulation of the obscene content even if they argued that they were unaware that a book contained such content. Consider what this means: booksellers cannot plead ignorance of obscene content within any of the books they sell, and will be liable nonetheless. The state only has to prove that the booksellers circulated obscene content, not that they did so knowingly. It is lucky that this part of the Supreme Court judgment went largely unnoticed, since it could easily be used by the intolerant to file criminal complaints that shut down large bookstores all over the country – all they need to do is look for a few books that the law would categorise as obscene. Booksellers would then have to scour every page and paragraph of each book they sell to weed out content that might get them arrested – this would make it very difficult to do business.

Online intermediaries as information gatekeepers

Intermediary liability first received attention in India after the infamous ‘DPS-MMS’ explicit video, featuring two minors, ended up being sold on Baazee.com. The Managing Director of the company that owned the website was arrested. The fact that he had no knowledge that this content was shared on the website was irrelevant thanks to the Supreme Court’s ‘Lady Chatterley’s Lover’ verdict. This situation made it clear that if the law applicable to bookshops continued to apply to online intermediaries, online platforms would not be able to function in India. A platform like Facebook or YouTube hosts too much user content to be able to sift through it and proactively filter out everything obscene.

Fortunately, the amendment of the Information Technology Act (IT Act) gave Internet intermediaries immunity from this liability for third party content. The immunity was conditional. Intermediaries that edit or otherwise have knowledge of the content that they transmit are not immune from liability. To remain immune from liability, intermediaries must comply with certain legal obligations to take down content or block it in response to government orders or court orders. These obligations also leverage the gate-keeping function of these intermediaries to regulate online content – internet service providers and online platforms can ensure that certain kinds of content are inaccessible in India.

Why gatekeepers matter

Although information intermediaries existed in the pre-Internet information ecosystem, their role is critical in the context of online content – several intermediaries mediate our access to online content. Some of these, like the gateways through which the Indian network connects to the global network, are located in India and are easy for the government to control, since they are subject to onerous licenses and are few enough in number for the state to control all of them successfully. Other intermediaries, like Facebook or Google, are online platforms, and most of these have offices outside Indian jurisdiction.

Discussions about freedom of expression that focus on the direct relationship between the state and the speaker are not helpful in this context. This kind of reasoning tends to ignore the collateral effects of certain kinds of regulation of speech – the ‘Lady Chatterley’s Lover’ case is a classic illustration of this, with its tremendous impact on the liability of all booksellers and later on Baazee.com and other web-based platforms.

As the new media make gatekeepers and intermediaries more critical to controlling the flow of information, we need to focus on other dimensions of freedom of expression if we are to ensure that effective safeguards are put in place to protect speech. Our jurisprudence on freedom of the press offers some degree of protection to newspapers so that regulation of their business structure cannot be used to influence their content, but this form of gatekeeper protection is limited to the press. There are information gatekeepers other than the press in India, and it is time that we think carefully about protecting the information ecosystem. Free speech principles need to accommodate themselves to a media ecosystem that is increasingly dependent on information gatekeepers.

Freedom of expression and access to information

It is time that our jurisprudence started focusing more on citizens’ right to access information. Although this right has been recognized in India, it needs to be outlined in more detail. In the well-known judgment in Shreya Singhal v. Union of India, which struck down section 66A of the Information Technology Act, the Supreme Court failed to deal with intermediary liability adequately because it did not use the lens of access to information and gatekeeper liability. Using traditional jurisprudence that focuses on the direct impact of regulation of speech, the court gave content-creators the right to a hearing and a right to appeal blocks and removals of their content wherever possible. However, it completely disregarded the rights of citizens to access online content.

The content blocking system in India makes all government blocking orders confidential. This means that when an intermediary is required to block content under the IT Act, users might imagine that the decision was a private one made by the intermediary. Since the intermediary is unlikely to be willing to spend resources battling for the various kinds of content it hosts, any blocking process that counts on the intermediary to offer up sufficient resistance to unconstitutional blocking orders errs egregiously. The law must offer those who are actually affected – the publishers and the readers of the information – a chance to fight for content that they have the right to circulate and access. Of these, the publishers of information do have some right to make their case before the government committee making the blocking decision, thanks to the Supreme Court’s decision in Shreya Singhal v. Union of India. But this judgment does nothing for citizens, who could lose access to a wealth of information if the government unreasonably blocks content created by someone in another country. The content publisher would not be in a position to defend its content in India, and citizens have not been given any avenue to defend their right to view the content before the government committee making the decision.

The focus on access to information has been discussed by many scholars, from Alexander Meiklejohn onwards. Amartya Sen has written about the salience of public discourse in a democracy. Robert Post and Jack Balkin have articulated in detail the importance of focusing on the free flow of information, or access to information, rather than on the rights of individual speakers. The right we refer to as ‘freedom of expression’ is about much more than the freedom to say what one pleases. It is the foundational principle from which our rules about the free flow of information have been built.

Conclusion

Section 66A was an example of what Jack Balkin characterises as ‘old school’ regulation of speech. This consists of criminal penalties, injunctions, and damages aimed directly at the speaker or publisher. The Supreme Court’s treatment of section 66A reflects its comfort with this form of regulation and its implications for freedom of expression.

Intermediary liability, and the use of Internet gatekeepers to control the flow of online information, follows a different system: it uses control over the infrastructure or platforms of speech to exercise control over speech itself. Jack Balkin characterises this as ‘new school’ regulation. Through ‘collateral censorship’, a third party is made to block or remove a primary speaker or publisher’s speech. For example, a government order or a court order requiring that certain online content be blocked does this by requiring an internet service provider or online platform to censor the information. New school regulation necessitates the co-operation of third-party intermediaries like internet service providers and online platforms with the government, and this can be achieved by compelling them to co-operate through the law or by using softer means to co-opt them.

New school regulation must be assessed in terms of the collateral harm that it causes. The question is no longer whether online pornography should be blocked; it is whether the process used to get intermediaries to block pornography can be abused to block constitutionally protected speech. We have already recognized the collateral effects of structural regulation in the context of press freedom, and the Supreme Court has barred certain kinds of structural interference with the media that might impact their reporting. It is time to create a version of this principle for online speech, and to think in terms of access and the free flow of information.

References

Ranjit Udeshi v. State of Maharashtra

Shreya Singhal v. Union of India

Secretary, Ministry of Information & Broadcasting, Govt. of India v. Cricket Association of Bengal, (1995) 2 SCC 161.

Sakal Papers v Union of India

Amartya Sen, Idea of Justice, 321-337 (2009)

Chinmayi Arun, Gatekeeper Liability and Article 19(1)(a) of the Constitution of India, NUJS Law Review [forthcoming-2015]

Jack Balkin, ‘Old-School/New-School Speech Regulation’, 127 Harv. L. Rev. 2296

Jack Balkin, ‘The first amendment is an information policy’, Hofstra Law Review 41 (2013)

Robert Post, Participatory Democracy and Free Speech, 97 Virginia L. Rev. 3 (2011).

Seth Kreimer, Censorship by Proxy: the First Amendment, Internet Intermediaries, and the Problem of the Weakest Link, Penn Law: Legal Scholarship Repository (2006)

We are not a totalitarian state and cannot be asked to moral police: AG tells SC in the Porn Petition

The Kamlesh Vaswani matter, which asks the Court to direct the Government to block all pornography on the Internet and which was conveniently used by the Government to order the disablement of over 850 websites last week, came up before the bench headed by the Chief Justice today.

Mr. Mukul Rohatgi, the Attorney General of India, represented the Union along with ASG Ms. Pinky Anand. He stated that after the last hearing the petitioners gave a list of 857 websites to the Government, which were blocked by the Department without any verification. Subsequently, the Department verified the list and asked the Internet Service Providers to block only the websites with child porn.

He said that if someone wants to watch porn in the confines of their bedroom, how can we interfere. He subsequently stated that there are various kinds of pornography – hard-core, soft, violent porn. At this juncture the Chief Justice quipped, ‘Mr. Rohatgi, how do you know all this?’, to which the AG replied, ‘Your lordship, I have not watched it, but I do plan to, since that is the only way to understand the concerns; moreover, my juniors are very tech savvy’. He also stated that pornography is a grey area and there are no straight answers.

The AG argued that geographical frontiers do not exist on the Internet – it is a borderless space, and it is very difficult to block anything on it. He said that if we block ten sites, another five would pop up with new names at new locations.

He again reiterated that if someone wants to watch this in the privacy of their bedroom, the state cannot act as a moral police or enter people’s bedrooms. He added that there are issues of freedom of speech and expression under Article 19(1)(a) involved in the case.

The AG stated that the Government is committed to Internet freedom and has launched the ambitious Digital India project, which aims to connect crores of people to the Internet. He added that the MyGov website is becoming a convergence point for large numbers of people to send ideas to the Government and connect with it.

He stated that the petitioner’s argument – that the law as it stands today is ineffective and thus unconstitutional – is not a valid one. He cited the example of customs officers in Mumbai who were booked for watching porn within a bungalow; the High Court acquitted them, as that is not an offence. He stated that the case is pending before the Supreme Court.

The AG submitted that there is no doubt that child porn has to be banned and the Government will make every effort to do so. However, he added that effective banning on the Internet is very difficult. He also stated that sites these days pick up the profile of the surfer and serve targeted advertising, which the Government cannot block, and that the Government is not willing to take up the mantle of moral policing.

He stated that as of today the Government’s position is that child porn needs to be banned; beyond that, if someone gives it a site which has child porn, it will block it, or block sites based on court orders as laid down in Shreya Singhal. Beyond that, the issue is a larger debate, which can happen in court or outside it – in Parliament or in society.

At this point Mr. Vaswani’s lawyer, Vijay Panjwani, stated that a criminal activity is a criminal activity whether in public or in private, and transmission of pornography is a criminal activity, which the ISPs are engaged in. He added that we have the technology to block these sites; parties including people from Banaras Hindu University have filed affidavits in the case stating the same. He said that there is seven years’ imprisonment for anyone transmitting obscene content under the Information Technology Act, but we are not even asking for that; we are just asking for the sites to be blocked.

The Attorney General interrupted at this point and stated that today every computer has a child/parental lock that can be used to limit children’s access to such sites, and there is other software being developed for mobiles and other devices.

He added that the best filter is not to block this content at the gateway; individuals can block it on their own devices if they wish, for if two adults want to watch it for entertainment, the state cannot enter their houses. The AG further stated that we cannot become a totalitarian state and there is a right to information and entertainment under Article 19(1)(a).

Ms. Meenakshi Arora, Senior Advocate, briefed by AoR Mr. Rahul Narayan and appearing for the Internet Service Providers Association of India, argued that there is a conflict between the two orders issued by the DoT on 31st July and 4th August 2015. She added that ISPs are not responsible for the content added by intermediaries.

The first order asked ISPs to disable 857 websites, but the subsequent order asked them to unblock 857 URLs. She stated that there is a difference between a website and a URL, and added that ISPs will block whatever they are asked to under the law and their license agreements, but they need clear instructions regarding what is to be blocked. The Government cannot ask ISPs to identify and block content; that is an unreasonable burden on them.

The CJI, speaking for the bench, stated that the Court has never passed any order in this case or asked for any blocking. The AG agreed and stated that, since there has been no order by the Court, he will ask the Department of Telecom (DoT) and the Department of Electronics and Information Technology (DeitY) to meet with the ISPAI and sort out the difficulties.

The AG informed the court that the Government is working on some self-regulation mechanisms and requested the court to list the matter in October.

(Sarvjeet is a Senior Fellow & Project Manager at the Centre)

The PornBan debate: our archived pieces on the subject

Sadly, the debate on banning pornography has not moved very far over the last two years. Here are pieces that CCG has published on the subject over time:

  1. The problem with blanket bans of online pornography: filtering online content
  2. Blocking online pornography: who should make constitutional decisions about speech
  3. Porn and keyword filters, and how we will be sacrificing our public discourse (within this piece on the AIB petition)

Delfi v Estonia: Dangers of Collateral Censorship

By Vasudev Devadasan

The issue of whether web platforms such as Facebook, Google or Reddit are liable for content posted by users and other third parties is one of the most significant issues confronting freedom of expression in the digital era. This is especially delicate when users post content that might constitute hate speech, or involve privacy and copyright violations. When should an intermediary be held liable for such content posted by its users? Most countries recognise the power of the Internet to contribute to public debate and offer various degrees of ‘safe harbour’ to intermediaries, exempting them from liability for user generated content. In the United States this is recognised by Section 230 of the Communications Decency Act and Section 512 of the Digital Millennium Copyright Act, which provide almost blanket immunity to intermediaries. In Europe, Articles 14 and 15 of the Electronic Commerce Directive protect intermediaries operating as mere technical conduits from liability up to the point of ‘actual knowledge’ of the offending content. Under Article 14(2) of the Directive, an intermediary is required to expeditiously take down offending content once it is aware of it. Yet despite these guidelines, the Grand Chamber of the European Court of Human Rights upheld the imposition of liability on the online news portal Delfi for anonymous comments on its website.

Delfi’s Methods and the Facts

Delfi is one of the largest news portals on the Internet in Estonia. On 24 January 2006 Delfi published an article titled “SLK Destroyed Planned Ice Road.” In less than 24 hours the article attracted 185 comments, over 20 of which contained personal threats and offensive language directed at SLK’s sole and majority shareholder at the time. His lawyers requested Delfi to take down the offensive comments, which it did immediately, and sued for half a million Estonian kroons.

Delfi’s status as an online news portal, the third-party nature of the comments, and the immediate removal of the comments would appear to have exempted it from liability. In addition, Delfi had a system that automatically screened for words that incited hatred, and any reader could mark a comment as insulting, following which it would be removed expeditiously.

The Original Judgement of the First Section

The role of any chamber of the European Court of Human Rights, as is constantly reiterated in its judgements, is not to examine and interpret domestic or European law, but rather to ascertain whether the effects of the judgements of national courts are compatible with the European Convention on Human Rights. Thus, when examining any restriction of free speech, the court will analyse the situation on a case-by-case basis and determine whether the restriction is (i) provided by law, (ii) pursues a legitimate aim, and (iii) is necessary in a democratic society.

For a restriction to be provided by law, the law restricting freedom of expression must be reasonably foreseeable. When examining Delfi’s claim, the original Chamber found that, as a professional publisher, Delfi should have been familiar with the domestic legislation, and, had it sought legal counsel, it would have been reasonably foreseeable that it would be held liable as an intermediary under Estonian law. Given that the defamatory nature of the comments was not disputed by either party, the legitimate aim was naturally the protection of the rights and reputation of others. When determining whether the restriction was necessary in a democratic society, the Chamber gave great weight to the fact that Delfi was the only party who could control the comments once they were posted. The original Chamber also thought it disproportionate to put the onus of identifying the original authors of the comments on the aggrieved party, given the technical difficulties involved in locating people on the internet. In light of this, and the negligible and civil nature of the fine imposed on Delfi, the original Chamber ruled that the restriction on free speech was in fact proportional and there was no violation of Article 10.

The Decision of the Grand Chamber

The outcome of the Grand Chamber judgement was contingent on two broad principles. First, it agreed with the original Chamber in characterising Delfi as possessing the level of responsibility usually attributed to publishers, owing to the control it exercised over the comments in question. The second line of reasoning, however, was unique to the Grand Chamber and raises some troubling implications for intermediary liability: the Grand Chamber imposed a higher burden on Delfi due to the ‘clearly unlawful’ nature of the comments.

The court notes with some delicacy that differing legal principles apply to an Internet-based media operation as opposed to traditional print media, and even calls for a “graduated and differentiated” application of liability. Ultimately, however, it follows in the footsteps of the original Chamber judgement and uses two sets of facts to hold that an online portal can, in principle, be held liable for third-party comments on its site. First, it gives great weight to Delfi’s economic interest, noting that, due to advertising on the site, Delfi had an interest in generating comments. Second, it again points to the fact that once a comment was posted, not even the user could remove it; Delfi alone was capable of doing so. The court uses this to point to the level of control Delfi exercised over the comments. This added responsibility arising from the existence of an economic interest does not take into account the costs to the website of generating public debate; as the dissenting opinion of Judges Sajó and Tsotsoria eloquently notes, “information is not cheap.” Moreover, a commenter on Delfi’s site would neither have been an employee of the publisher nor, in all probability, even known to it. And given the large number of comments flowing into websites, the control that both the Chamber and the Grand Chamber conclude Delfi to have presupposes a level of knowledge that intermediaries in most cases do not have.

The second, and novel, aspect of the Grand Chamber’s judgement is that it imposes a higher burden on intermediaries where third party content is ‘clearly unlawful’. The court holds the comments to be manifestly unlawful, reasoning that some comments are incompatible with the values of the European Convention on Human Rights itself and thus in violation of Article 17. Previously, the types of speech condemned under Article 17 have been Holocaust denial, justification of pro-Nazi policies, and portrayals of Jews as the source of evil in Russia. Yet throughout the judgement the court invokes the legitimate aim of protecting ‘the reputation and rights of others’ to justify the restriction on free speech. This is inconsistent with the court’s own past analysis, because speech caught by Article 17 is deemed to have no value and acquires no protection under Article 10, rendering meaningless the three-part test and the proportionality analysis set out in the judgement. Thus, without a detailed analysis of the comments on Delfi’s site, the Grand Chamber has effectively imposed what amounts to strict liability on intermediaries for ‘clearly unlawful’ comments, without clearly delineating what aspects of the comments meet this standard. At heart, Delfi originally began as a defamation claim, and nine years later the Grand Chamber’s re-characterisation of the comments as hate speech leaves much ambiguity as to whether defamatory material, or other categories of content, would likewise result in an online portal being held liable for such content merely existing on its website without its knowledge.

Perhaps the dissenting opinion in the judgement best highlights the dangers this could create for the future of intermediary liability. The Grand Chamber’s judgement requires intermediaries to remove ‘clearly unlawful’ content as soon as it is published, even when they may have no actual knowledge of its existence. The only way to achieve this is round-the-clock supervision. This may lead smaller online sites that cannot afford such supervision to discontinue comments altogether. Even larger sites, naturally wary of legal disputes, are likely to implement overbroad censorship of third party content, leading to collateral censorship. In 2011, the Human Rights Council’s Special Rapporteur stated that “no one should be held liable for content on the Internet of which they are not the author, and no State should use or force intermediaries to undertake censorship on its behalf.” The imposition of liability on Delfi has certainly created significant uncertainty regarding the principles of intermediary liability, and only time will tell whether the exceptional circumstances of this case merited such a result, or whether this signals a departure from the ‘actual knowledge’ and safe harbour standards that intermediaries currently enjoy.

Some other analyses of the judgment are available below:

Delfi AS v. Estonia: Grand Chamber confirms liability of online news portal for offensive comments posted by its readers (Strasbourg Observers)

Delfi v. Estonia: Privacy Protection and Chilling Effect (Verfassungs Blog)

European Court of Human Rights Undermines Essential Protections for Free Expression Online (Centre for Democracy & Technology)

The Delfi AS vs Estonia judgement explained (LSE Media Policy Blog)

European Court strikes serious blow to free speech online (Article 19)

Delfi AS v. Estonia: a blow to free expression online (Access)

(Vasudev Devadasan, an intern at CCG, is a third year student at Jindal Global Law School and an alumnus of the Price Moot, where he worked on issues of hate speech and intermediary liability.)

Freedom of Speech & Google Search- Preliminary Notes for India: Working Paper by Ujwala Uppaluri

As the Internet progressively becomes a key means by which information is communicated and exchanged, there is a growing need to examine how the applications that facilitate access to these troves of information operate.

Search engines have come to play a critical role in the digital information landscape. In India, search is currently the subject of an investigation by, and more recently a fine from, the Competition Commission of India. The question of what search engines may list in their results has also come up before the Indian Supreme Court.


Google and other search engines have argued that their algorithms’ ranking of search results is an exercise in editorial discretion, available to all speakers as a First Amendment right. This has laid the groundwork for claims that search engines enjoy rights to freedom of speech. However, in the recent landmark case of Shreya Singhal v. Union of India, the Supreme Court stated during the oral hearing that intermediaries do not have free speech rights.

Against this backdrop, this paper briefly introduces comparative scholarship on search and the constitutional right to free speech, and takes the first steps towards making the argument for the need to regulate important participants in the information landscape such as search engines, and for the need to construct and clarify Article 19(1)(a) frameworks to ensure that rights adjudication arising from such regulation results in balanced outcomes.

The complete paper can be found here: https://drive.google.com/file/d/0BwY1OLu_H1ICanlpUmt2dGdqelk/view?usp=sharing

(Ujwala Uppaluri was a Fellow at CCG from June 2014 to April 2015 and will be joining Harvard Law School to pursue her LL.M. from August 2015.)