Report on Intermediary Liability in India

The question of when intermediaries are liable, or conversely not liable, for content they host or transmit is often at the heart of regulating content on the internet. This is especially true in India, where the Government has relied almost exclusively on intermediary liability to regulate online content. With the advent of the Intermediary Guidelines 2021, and their subsequent amendment in October 2022, there has been a paradigm shift in the regulation of online intermediaries in India. 

To help understand this new regulatory reality, the Centre for Communication Governance (CCG) is releasing its ‘Report on Intermediary Liability in India’ (December 2022).

This report aims to provide a comprehensive overview of the regulation of online intermediaries and their obligations with respect to unlawful content. It updates and expands on the Centre for Communication Governance’s 2015 report documenting the liability of online intermediaries, now covering the decisions in Shreya Singhal vs. Union of India and Myspace vs. Super Cassettes Industries Ltd, the Intermediary Guidelines 2021 (including the October 2022 Amendment), the E-Commerce Rules, and the IT Blocking Rules. It captures over two decades of regulatory and judicial practice on the issue of intermediary liability since the adoption of the IT Act. The report aims to provide practitioners, lawmakers, regulators, judges, and academics with valuable insights as they embark on shaping the coming decades of intermediary liability in India.

Some key insights that emerge from the report are summarised below:

Limitations of Section 79 (‘Safe Harbour’) Approach: In the cases analysed in this report, there is little judicial consistency in the application of secondary liability principles to intermediaries, including the obligations set out in the Intermediary Guidelines 2021, and monetary damages for transmitting or hosting unlawful content are almost never imposed on intermediaries. This suggests that there are significant limitations to the regulatory impact of obligations imposed on intermediaries as pre-conditions to safe harbour.

Need for clarity on content moderation and curation: The text of Section 79(2) of the IT Act grants intermediaries safe harbour provided they act as mere conduits, not interfering with the transmission of content. There exists ambiguity over whether content moderation and curation activities would cause intermediaries to violate Section 79(2) and lose safe harbour. The Intermediary Guidelines 2021 have partially remedied this ambiguity by expressly stating that voluntary content moderation will not result in an intermediary ‘interfering’ with the transmission under Section 79(2). However, amendments to the IT Act are ultimately required to provide regulatory certainty.

Intermediary status and immunity on a case-by-case basis: An entity’s classification as an intermediary is not a status that applies across all its operations (like a ‘company’ or a ‘partnership’), but turns on the function the entity performs vis-à-vis the specific electronic content it is sued in connection with. Courts should determine whether an entity is an ‘intermediary’ and whether it complied with the conditions of Section 79 in relation to the content it is being sued for. Consistently making this determination at a preliminary stage of litigation would greatly further the efficacy of Section 79’s safe harbour approach.

Concerns over Grievance Appellate Committees (GACs): While the October 2022 Amendment stipulates that two members of every GAC shall be independent, no detail is provided as to how such independence shall be secured (e.g., security of tenure and salary, oath of office, minimum judicial qualifications, etc.). Such independence is vital because GAC members are appointed by the Union Government, yet the Union Government or its functionaries or instrumentalities may also be parties before a GAC. Further, given that GACs are authorities ‘under the control of the Government of India’, they are obliged to abide by the principles of natural justice and due process, and to comply with the Fundamental Rights set out in the Constitution. If a GAC directs the removal of content beyond the scope of Article 19(2) of the Constitution, questions of an impermissible restriction on free expression may be raised.

Actual knowledge in 2022: The October 2022 Amendment requires intermediaries to make reasonable efforts to “cause” their users not to upload certain categories of content and to ‘act on’ user complaints against content within seventy-two hours. Requiring intermediaries to remove content on pain of losing safe harbour in circumstances other than the receipt of a court or government order prima facie violates the decision in Shreya Singhal. Further, India’s approach to notice and takedown continues to lack a system for the reinstatement of content.

Uncertainty over government blocking power: Section 69A of the IT Act expressly grants the Union Government the power to block content, subject to a hearing for the originator (uploader) or intermediary. However, Section 79(3)(b) of the IT Act may also be utilised to require intermediaries to take down content without some of the safeguards provided in Section 69A. The fact that the Government has relied on both provisions in the past, and that it does not voluntarily disclose blocking orders, makes robust legal analysis of the blocking power challenging.

Hearing originators when blocking: The decision in Shreya Singhal and the requirements of due process support the understanding that the originator must be notified and granted a hearing under the IT Blocking Rules prior to their content being restricted under Section 69A. However, evidence suggests that the government regularly does not provide originators with hearings, even where the originator is known to the government. Instead, the government directly communicates with intermediaries away from the public eye, raising rule of law concerns.

Issues with first originators: Both methods proposed for ‘tracing first originators’ (hashing unique messages and affixing encrypted originator information) are easily circumvented, require significant technical changes to the architecture of messaging services, offer limited investigatory or evidentiary value, and would likely undermine the privacy and security of all users to catch a few bad actors (a sketch of the hash-circumvention problem appears below). Given these considerations, it is unlikely that such a measure would satisfy the proportionality test laid out by current Supreme Court doctrine.
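To illustrate how easily the hashing approach is circumvented, the following is a minimal sketch in Python. It assumes the simplest form of the proposal, in which a message is identified by a cryptographic hash of its exact contents (an illustrative assumption, not any platform’s actual design). Any trivial edit to a forwarded message produces an entirely unrelated hash, severing the link back to the ‘first originator’.

import hashlib

def message_hash(text: str) -> str:
    # Identify a message by the SHA-256 digest of its exact content.
    # (Illustrative assumption only; real traceability proposals vary.)
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "Forward this message to ten people"
edited = "Forward this message to ten people!"  # a single added character

print(message_hash(original))
print(message_hash(edited))
# The two digests are entirely unrelated, so the edited copy can no longer
# be matched to the original message or traced to its "first originator".
print(message_hash(original) == message_hash(edited))  # prints: False

The same logic applies to screenshots, re-typed text, or cross-platform forwarding: any change to the underlying bytes defeats a content-hash register.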

Broad and inconsistent injunctions: An analysis of injunctions against online content reveals that the contents of court orders are often sweeping, imposing vague compliance burdens on intermediaries. When issuing injunctions against online content, courts should limit blocking or removals to specific URLs. Further, courts should be cognisant of the fact that intermediaries have themselves not committed any wrongdoing, and the effect of an injunction should be seen as meaningfully dissuading users from accessing content rather than as an absolute prohibition.

This report was made possible by the generous support we received from National Law University Delhi. CCG would like to thank our Faculty Advisor Dr. Daniel Mathew for his continuous direction and mentorship. This report would not have been possible without the support provided by the Friedrich Naumann Foundation for Freedom, South Asia. We are grateful for comments received from the Data Governance Network and its reviewers. CCG would also like to thank Faiza Rahman and Shashank Mohan for their review and comments, and Jhalak M. Kakkar and Smitha Krishna Prasad for facilitating the report. We thank Oshika Nayak of National Law University Delhi for providing invaluable research assistance for this report. Lastly, we would also like to thank all members of CCG for the many ways in which they supported the report, in particular the ever-present and ever-patient Suman Negi and Preeti Bhandari for their unending support for all the work we do.

Delhi High Court Refuses to Make Group Administrators Liable for Content Posted by Other Members

In April 2016, two directives issued by two separate state governments in India made social media group administrators (‘administrators’) liable for content circulated by other members of the group. This came in the wake of a series of arrests in India for content posted on WhatsApp, including arrests of administrators for content posted by other members. In our previous post, we argued that making administrators liable has no legal basis and severely undermines their right to freedom of speech and expression.

This question surrounding the liability of administrators for content posted by others recently came up for consideration before the High Court of Delhi. In a recent order, the Court recognised the problem of placing this burden on administrators.

In this case, damages for defamation were also sought from the administrator of a Telegram group and a Google Group on which the allegedly defamatory statements were published. Recognising the administrator’s inability to control content posted on the group, the Court held that making an administrator liable would be equivalent to holding the ‘manufacturer of the newsprint’ liable for defamatory statements published in a newspaper.

The Court reasoned that, at the time of creating the group, the administrators could not have expected members to make defamatory statements. Further, the Court took into account the fact that the statements posted did not require the administrator’s approval. Consequently, the Court found no reason to hold the administrator responsible.

However, the petitioner’s contention that the administrator has the power to ‘add or remove people from the group/platform as well as to filter’ content was not evaluated on its merits, as it was not the pleaded case of the petitioner. How courts will respond to such arguments remains to be seen.

In the midst of increasing restrictions on social media groups and administrators, this order is a welcome step. It is imperative that Governments, law enforcement agencies and courts take note to ensure that freedom of expression of administrators and users of such platforms/groups is not undermined.

Ministry of Road Transport Issues Advisory on Taxi Aggregator Apps

Written by Siddharth Manohar

The Ministry of Road Transport and Highways’ recent advisory contains detailed guidelines for internet-based taxi aggregators (also known as ride-hailing apps) and their continued operation in India. An interim order by the Delhi High Court in July had banned the operation of such services in Delhi, and this advisory has been welcomed by the companies operating in the market.

Road transport remains a subject under List II (the State List) of the Constitution, which means that only State governments (as opposed to the Union Government) may frame and enforce policy on matters falling under it. As highlighted in the statement of the Ministry’s Secretary, Vijay Chhibber, the object of the advisory is to regulate taxi services offered by app-based providers such as Uber and Ola as effectively as regular taxi companies. The advisory, he said, would “clear the air for states to form their own rules, treating them at par with other cab fleet owners”. Being a mere advisory, it has no direct penal consequences even if service providers do not comply with the guidelines. However, it does act as a model on which states can develop their own regulations, which would carry the power to punish non-compliance.

The guidelines go some way towards solving certain headaches caused by the application of the Radio Taxi Scheme, 2006 to ride-hailing internet applications. The plea of taxi aggregator companies at the time of the ban had been that their service was essentially different from a regular taxi company and they could not be shoehorned into the older regulatory scheme designed for conventional taxi companies. The drivers, for instance, are not direct employees of the taxi aggregator company. Therefore, the rule in the Radio Taxi Scheme that holds the employer of the driver responsible for the actions and safety of the driver would not be applicable to taxi aggregators.

To deal with this, the new guidelines specify the background checks and registration documents the company must obtain when registering a driver with the service. They also require operators to carry out a training programme for their registered drivers. This takes into account the difference in how these companies operate while nonetheless setting down the same regulatory standards for the operation of the taxi service.

The guidelines take the useful step of barring drivers registered with the service, and the vehicles they use, from advertising themselves as regular taxis. The document also bars companies from mandating any minimum number of driving hours for a driver to remain registered with the service, and requires compliance with rules on the maximum number of driving hours, to ensure driving safety. It also prevents companies from restricting drivers registered with them from registering with other taxi aggregator services. These measures put greater bargaining power in the hands of registered drivers and check anti-competitive practices such as preventing a driver from opting for a different service that offers better incentives for completing a certain number of rides on a given day.

The advisory also contains a host of requirements geared towards passenger safety. First, the comprehensive rules regarding registration and permit documents for the driver and vehicle boost accountability. Registration includes police verification of drivers who are to be involved with the service. The company is barred from registering persons convicted in the past seven years of driving under the influence of alcohol or drugs, or of any cognizable offence, including sexual offences, terror-related offences, and property offences. The guidelines require detailed information about the driver and vehicle to be provided on the platform to users of the service, and for this data, along with location data, to be shareable with two trusted contacts, apart from being transferred to the authorities when required. For this purpose, all vehicles are to be fitted with location-tracking technology, apart from the standard first-aid and safety equipment prescribed in relevant laws. Further, where a complaint of discriminatory practices is filed against a driver, the advisory suggests that the driver be suspended from accessing the service until internal investigations into the matter are concluded.

The advisory also takes up some useful administrative requirements. It demands that the service operator incorporate a legal entity within India. It mandates an office address in each state where the company operates, along with an assigned in-charge, for easier service of notice for purposes such as court proceedings. It also requires a comprehensive list of all drivers, their associated vehicles, and related details to be submitted to the Licensing Authority on a monthly basis. It mandates that the company provide a 24x7 helpline with a call centre, as well as a web-based portal, through which customer grievances can be communicated.

Dealing with these different aspects of regulating taxi aggregators, the advisory strikes a middle ground between critics who complain that these services bypass accountability measures and ignore customer safety, and the companies, which argue that while customer safety and quality of service remain important, existing rules for commercial road transport services cannot be applied to them as is. One hopes that the State governments now take up the clear regulatory call sent out by the Ministry and set about framing legislation that enforces the provisions laid down in the advisory document. The relevant Karnataka State department is reported to be coming out with its policy soon; if it does, that will be the first instance of authoritative regulation on the matter framed as a considered response.

Delfi v Estonia: Dangers of Collateral Censorship

By Vasudev Devadasan

The issue of whether web platforms such as Facebook, Google, or Reddit are liable for content posted by users and other third parties is one of the most significant questions confronting freedom of expression in the digital era. It is especially delicate where users post content that might constitute hate speech, or that involves privacy or copyright violations. When should an intermediary be held liable for such content posted by its users? Most countries recognise the power of the Internet to contribute to public debate and offer various degrees of ‘safe harbour’ to intermediaries, exempting them from liability for user-generated content. In the United States, Section 230 of the Communications Decency Act provides near-blanket immunity to intermediaries, while Section 512 of the Digital Millennium Copyright Act establishes a notice-and-takedown safe harbour for copyright claims. In Europe, Articles 14 and 15 of the Electronic Commerce Directive protect intermediaries operating as mere technical conduits from liability up to the point of ‘actual knowledge’ of the offending content; under Article 14 of the Directive, an intermediary is required to expeditiously take down such content once it becomes aware of it. Yet despite these frameworks, the Grand Chamber of the European Court of Human Rights, Europe’s highest human rights appellate body, upheld the imposition of liability on the online news portal Delfi for anonymous comments on its website.

Delfi’s Methods and the Facts

Delfi is one of the largest news portals on the Internet in Estonia. On 24 January 2006, Delfi published an article titled “SLK Destroyed Planned Ice Road”. In less than 24 hours the article attracted 185 comments, over 20 of which contained personal threats and offensive language directed at SLK’s sole or majority shareholder at the time. His lawyers requested that Delfi take down the offensive comments, which Delfi did immediately; they nonetheless sued for half a million Estonian kroons.

Delfi’s status as an online news portal, the third-party nature of the comments, and the immediate removal of the comments would appear to have exempted it from liability. In addition, Delfi had a system that automatically screened for offensive words inciting hatred, and any reader could mark a comment as insulting, following which it would be removed expeditiously.

The Original Judgement of the First Section

The role of any chamber of the European Court of Human Rights, as is constantly reiterated through its judgements, is not to examine domestic or European law and interpret it afresh, but rather to ascertain whether the effects of national courts’ judgements are compatible with the European Convention on Human Rights. Thus, when examining any restriction of free speech, the court analyses the situation case by case and determines whether the restriction (i) is provided by law, (ii) pursues a legitimate aim, and (iii) is necessary in a democratic society.

For a restriction to be provided by law, the law restricting freedom of expression must be reasonably foreseeable. When examining Delfi’s claim, the original Chamber found that, as a professional publisher, Delfi should have been familiar with the domestic legislation and, had it sought legal counsel, could reasonably have foreseen that it would be held liable as an intermediary under Estonian law. Given that the defamatory nature of the comments was not disputed by either party, the legitimate aim was naturally the protection of the rights and reputation of others. When determining whether the restriction was necessary in a democratic society, the Chamber gave great weight to the fact that Delfi was the only party who could control the comments once they were posted. The original Chamber also thought it disproportionate to put the onus of identifying the original authors of the comments on the aggrieved party, given the technical difficulties involved in locating people on the Internet. In light of this, and the negligible and civil nature of the fine imposed on Delfi, the original Chamber ruled that the restriction on free speech was proportional and that there was no violation of Article 10.

The Decision of the Grand Chamber

The outcome of the Grand Chamber judgement rested on two broad lines of reasoning. First, the Grand Chamber agreed with the original Chamber in characterising Delfi as bearing the level of responsibility usually attributed to publishers, owing to the control Delfi exercised over the comments in question. The second line of reasoning was unique to the Grand Chamber, and raises some troubling implications for intermediary liability: it imposed a higher burden on Delfi due to the ‘clearly unlawful’ nature of the comments.

The court notes with some delicacy that differing legal principles apply to an Internet-based media operation as opposed to traditional print media, and even calls for a “graduated and differentiated” application of liability. Ultimately, however, it follows in the footsteps of the original Chamber judgement and uses two sets of facts to hold that an online portal can, in principle, be held liable for third-party comments on its site. First, it gives great weight to Delfi’s economic interest, noting that, because of advertising on the site, Delfi had a commercial interest in generating comments. Second, it again points to the fact that once a comment was posted, even its author could not remove it; Delfi alone was capable of doing so. The court uses this to establish the level of control Delfi exercised over the comments. This added responsibility arising from the existence of an economic interest does not take into account the website’s costs in generating public debate; as the dissenting opinion of Judges Sajó and Tsotsoria eloquently notes, “information is not cheap.” Moreover, a commenter on Delfi’s site would neither have been an employee of the publisher nor, in all probability, even known to it. And given the large number of comments flowing into websites, the control that both the Chamber and the Grand Chamber conclude Delfi to have had presupposes a level of knowledge that intermediaries in most cases do not possess.

The second, and novel, aspect of the Grand Chamber’s judgement is that it imposes a higher burden on intermediaries where third-party content is ‘clearly unlawful’. The court holds the comments to be manifestly unlawful and argues that some comments are incompatible with the values of the European Convention on Human Rights itself, and thus in violation of Article 17. Previously, the types of speech condemned under Article 17 have been Holocaust denial, the justification of pro-Nazi policies, and the portrayal of Jews as the source of evil in Russia. However, throughout the judgement the court uses the legitimate aim of protecting ‘the reputation and rights of others’ to justify the restriction on free speech. This is inconsistent with the court’s own past analysis, because speech hit by Article 17 is deemed to have no value and acquires no protection under Article 10, rendering meaningless the three-part test and the proportionality analysis provided in the judgement. Thus, without a detailed analysis of the comments on Delfi’s site, the Grand Chamber has effectively imposed what amounts to strict liability on intermediaries for ‘clearly unlawful’ comments, without delineating what aspects of the comments meet this standard. At heart, Delfi began as a defamation claim, and nine years later the Grand Chamber’s re-characterisation of the comments as hate speech leaves much ambiguity as to whether defamatory material, or any other content, could likewise result in an online portal being held liable for content that merely exists on its website without its knowledge.

Perhaps the dissenting opinion in the judgement best highlights the dangers this could pose for the future of intermediary liability. The Grand Chamber’s judgement requires intermediaries to remove ‘clearly unlawful’ content as soon as it is published, even when they may not have actual knowledge of its existence. The only way to do this is to ensure round-the-clock supervision. This may drive smaller online sites that cannot afford such supervision to discontinue comments altogether. Even larger sites, naturally wary of legal disputes, are likely to implement overbroad censorship of third-party content on their sites, leading to collateral censorship. In 2011 the Human Rights Council’s Special Rapporteur stated that “no one should be held liable for content on the Internet of which they are not the author, and no State should use or force intermediaries to undertake censorship on its behalf.” The imposition of liability on Delfi has created significant uncertainty regarding the principles of intermediary liability, and only time will tell whether the exceptional circumstances of this case merited such a result, or whether it signals a departure from the ‘actual knowledge’ and safe harbour standards that intermediaries currently enjoy.

Further analyses of the judgement are available below:

Delfi AS v. Estonia: Grand Chamber confirms liability of online news portal for offensive comments posted by its readers (Strasbourg Observers)

Delfi v. Estonia: Privacy Protection and Chilling Effect (Verfassungs Blog)

European Court of Human Rights Undermines Essential Protections for Free Expression Online (Centre for Democracy & Technology)

The Delfi AS vs Estonia judgement explained (LSE Media Policy Blog)

European Court strikes serious blow to free speech online (Article 19)

Delfi AS v. Estonia: a blow to free expression online (Access)

(Vasudev Devadasan, an intern at CCG, is a third-year student at Jindal Global Law School and an alumnus of the Price Moot, where he worked on issues of hate speech and intermediary liability)

One Man’s Pornography is Another Man’s High Art: Internet Service Providers Tell Supreme Court in the Porn Petition

Would photographs of Khajuraho be termed as porn?

On 27 January 2014, the Supreme Court heard the petition filed by Kamlesh Vaswani requesting the court to pass an order blocking websites with pornographic content in the country. The petition seeks, among other things, to make viewing pornography a non-bailable and cognizable offence. (Here and here are television discussions on the PIL, featuring the petitioner Mr. Kamlesh Vaswani.) Currently, while the creation and distribution of pornography are criminalised in India, consumption is not.

Of the five respondents, only the Internet Service Providers Association of India (ISPAI) has filed a reply. The Supreme Court has therefore asked the other respondents (Union of India, Ministry of Communications & Information Technology, Ministry of Information & Broadcasting, and the Department of Telecom) to file their replies within three weeks. The bench of Justices BS Chauhan and J Chelameswar has also asked the respondents to specifically address child pornography on the internet in their replies. During the hearing, the lawyer appearing for the Union of India informed the court that the government has requested the constitution of a technical committee to provide guidance on the issue. In a previous hearing the government had informed the court that it would find it difficult to block international porn sites viewable in India.

ISPAI has stated in its reply that there is “no unanimously accepted definition of pornography and the boundaries of the same are amorphous.” Its response highlights this definitional problem, observing that “one man’s pornography is another man’s high art” and using AIDS-awareness websites and photographs of Khajuraho as examples of how explicit material is not always pornography. ISPAI further states that “ISPs neither create content of any sort, nor do they own, promote, modify or edit it. They are mere authorized service providers who provide customers access to internet. ISPs are mere conduits and they cannot be made liable for the contents they do not own. It would be akin to making liable telecom companies for conversations people have on their network”.

ISPAI has argued in its written response that ISPs ought not to block content without specific orders from the courts or the government because “such blocking would tantamount to pre-censorship of contents without authority of law and could unfairly limit the fundamental rights of the customers and may expose them to liability under civil laws”.

Advocates Rahul Narayan and Shivain Vaidalingam appeared for ISPAI in the matter, and the court has fixed 10 March 2014 as the next date of hearing.

(Sarvjeet is a Project Manager and Research Fellow at the Centre for Communication Governance at National Law University Delhi)