When the Empire SLAPPs Back

“Short of a gun to the head, a greater threat to First Amendment expression can scarcely be imagined”

– Justice Nicholas Colabella of the New York Supreme Court, in Gordon v. Marrone.

The above statement vividly describes what has come to be called a SLAPP suit – Strategic Lawsuit Against Public Participation. The term was coined by University of Denver Professors Penelope Canan and George Pring in their book ‘SLAPPs: Getting Sued for Speaking Out’.[1] SLAPPs are generally characterized by deep-pocketed individuals or entities pursuing litigation as a way of intimidating or silencing their critics.

The suit may well have no merit; the objective is primarily to threaten or coerce critics into silence or, in the alternative, to impose prohibitive costs on criticism. SLAPPs also have the effect of suppressing reportage about the underlying claims. Even if defendants win the lawsuit on the merits, victory comes at an immense cost in resources, an experience likely to deter them, and others, from speaking out in the future. Faced with an uncertain legal process, defendants are also likely to seek a settlement. While this allows them to avoid an expensive process, it usually entails abandoning their opposition as well. By in effect chilling citizen participation in government, SLAPP suits strike at the heart of participatory democracy.

SLAPPs have been employed in India as well, in a number of instances. These usually involve large corporations, powerful individuals, and even private universities dragging media houses, journalists, or academics to court over unfavorable reportage. Recent instances indicate that SLAPPs can also be employed by influential people accused of sexual assault or harassment, with the apparent aim of suppressing media coverage and deterring victims from publicly speaking out.

Defamation suits tend to be the weapon of choice for SLAPPs. In India, where defamation is also a criminal offence, this can be a particularly effective strategy, especially since a criminal complaint may be pursued concurrently with a civil claim. Another tactic to make the process more punitive is to file the suit in a remote, inconvenient location where the offending publication happens to have been made available – in the context of the internet, this could theoretically be anywhere.

There have not been many instances where the judiciary has demonstrated awareness of this phenomenon. In Crop Care Federation of India v. Rajasthan Patrika, reports had been published in the Rajasthan Patrika about the harmful effects of pesticides. The Crop Care Federation of India, an industry body of pesticide manufacturers, sued the newspaper and its employees for allegedly defaming its members. In response, the defendants filed an application for rejection of the plaint under Order 7 Rule 11 of the Code of Civil Procedure, 1908, arguing that the plaintiff was an association of manufacturers and not a determinate body, which is a necessary requirement for a cause of action in a defamation suit. Justice Ravindra Bhat dismissed the suit on this ground, but also explicitly called out the plaintiff’s suit as a SLAPP, with a reference to Justice Nicholas Colabella’s dictum in Gordon v. Marrone. He went on to note that, “in such instances the plaintiff’s goals are accomplished if the defendant succumbs to fear, intimidation, mounting legal costs or simple exhaustion and abandons the criticism. A SLAPP may also intimidate others from participating in the debate.”

Several jurisdictions have enacted ‘anti-SLAPP’ legislation in an attempt to protect defendants from such practices. Broadly, such legislation gives the defendant an opportunity to seek dismissal of the suit early in the proceedings. Under most anti-SLAPP statutes in the United States, if the defendant demonstrates that the statements were made in the exercise of free speech and on matters of legitimate public interest, the burden shifts onto the plaintiff to establish a probability of success on their claims. Failure to do so leads to a dismissal, with the plaintiff having to compensate the defendant for legal costs. Typically, the discovery process is halted while the motion is being adjudicated, which further mitigates the financial toll that the proceedings might otherwise take.

In a similar vein, one of the recommendations in India has been to introduce a procedure under Order 7 Rule 11 that allows suits bearing the marks of a SLAPP to be summarily dismissed. Broader reforms to the law of defamation may also limit the impact of SLAPPs. It has been proposed that Sections 499 and 500 of the Indian Penal Code, 1860, which criminalize defamation, should be repealed; it is widely held that, despite the Supreme Court’s contrary view, the imposition of penal consequences for defamation runs counter to the free speech ideals enshrined in our Constitution. There are also suggestions to codify civil defamation, with higher thresholds for statements regarding public officials or public figures and a stricter requirement of demonstrating harm, as well as proposals to allow corrections and apologies to be offered as remedies, and for damages designed to be primarily restorative rather than punitive.

According to Pring and Canan, SLAPPs are a way for plaintiffs to transform “a public, political controversy into a private, legalistic one.”[2] Defamation suits, and SLAPP suits in general, have become a tool to deter public scrutiny and criticism of those in power. Drawing reasonable inferences from fact is essential to the functioning of the press, and the internet has provided citizens an avenue to express their opinions and grievances. Both are likely to limit the legitimate exercise of free speech if they run the risk of being dragged to court to mount a legal defense of their claims. Our legal framework seeks to deliver justice to all, but it must also be cognizant of how it may be subverted towards nefarious ends.

[1] Penelope Canan and George Pring, SLAPPs: Getting Sued for Speaking Out (Temple University Press, 1996).

[2] Id., at 10.


An update on Sabu Mathew George vs. Union of India

Today, the Supreme Court heard the ongoing matter of Sabu Mathew George vs. Union of India. In 2008, a petition was filed to ban advertisements endorsing sex-selective abortions from search engine results. Such advertisements are illegal under Section 22 of the Pre-conception and Pre-natal Diagnostic Techniques (PNDT) Act, 1994. Several orders have been passed over the last few years, the most recent on April 13th, 2017. Pursuant to these orders, the Court had directed the Centre to set up a nodal agency where complaints against sex-selective ads could be lodged. The Court had also ordered the search engines involved to set up in-house expert committees in this regard. The order dated April 13th stated that compliance with the mechanism in place would be checked hereinafter. Our blog posts covering these arguments and other issues relevant to search neutrality can be found here and here.

Today, the petitioner’s counsel stated that the nodal agency in question should be able to take cognisance of offending content suo motu, and not restrict its functioning to the method prescribed previously. Currently, individuals can file complaints with the nodal agency, which are then forwarded to the search engine in question. The relevant part of the order (16/11/16) is as follows:

“…we direct that the Union of India shall constitute a “Nodal Agency” and give due advertisement in television, newspapers and radio by stating that it has been created in pursuance of the order of this Court and anyone who comes across anything that has the nature of an advertisement or any impact in identifying a boy or a girl in any method, manner or mode by any search engine shall be brought to its notice. Once it is brought to the notice of the Nodal Agency, it shall intimate the concerned search engine or the corridor provider immediately and after receipt of the same, the search engines are obliged to delete it within thirty-six hours and intimate the Nodal Agency. Needless to say, this is an interim arrangement pending the discussion which we have noted herein-before…”

On the respondents’ side, counsel stated that over the last few months Microsoft had received only one complaint and Yahoo had not received any, arguing that the nodal agency would not have to take on a higher level of regulation. Further, on the issue of suo motu cognisance, they stated that it would be untenable to expect a government agency to ‘tap’ into search results. As per the counsel, the last order had only contemplated checking compliance with the nodal agency system and the constitution of an expert committee, both of which had been established.

The petitioners stated that they would need more time and would suggest other measures for effective regulation.

The next hearing will take place on the 24th of November, 2017.

Understanding the ‘NetzDG’: Privatised censorship under Germany’s new hate speech law

By William James Hargreaves

The Network Enforcement Act

The Network Enforcement Act (NetzDG), a law passed by the German Government on the 30th of June 2017, operates to fine social media companies up to 50 million Euros – approximately 360 crore rupees – if they persistently fail to remove hate speech from their platforms within 24 hours of the content being posted. Companies have up to one week where the illegality of the content is debatable.

The NetzDG is intended to hold social media companies financially liable for the opinions expressed on their platforms. The Act will effectively subject social media platforms to the stricter content standards demanded of traditional media broadcasters.

Why was the act introduced?

Germany is one of the world’s strictest regulators of hate speech. The country’s Criminal Code covers defamation, public threats of violence and incitement to illegal conduct, and provides for incarceration for Holocaust denial or inciting hatred against minorities. Germany is a country sensitive to the persuasive power of oratory in radicalizing opinion, and the parameters of these sensitivities are being tested as the influx of more than one million asylum seekers and migrants has catalyzed a notably belligerent public discourse.

In response to the changing discourse, Facebook and a number of other social media platforms consented in December 2015 to the terms of a code of conduct drafted by the Merkel Government. The code of conduct was intended to ensure that platforms adhered to Germany’s domestic law when regulating user content. However, a study monitoring Facebook’s compliance found the company deleted or blocked only 39 percent of reported content, a rate that put Facebook in breach of the agreement.

The NetzDG turns the voluntary agreement into a binding legal obligation, making Facebook liable for any future failure to adhere to its terms.

In a statement made following the law’s enactment, German Justice Minister Heiko Maas declared: ‘With this law, we put an end to the verbal law of the jungle on the Internet and protect the freedom of expression for all… This is not a limitation, but a prerequisite for freedom of expression’. The premise of Minister Maas’s position, and the starting point for the principles that justify outlawing hate speech, is that verbal radicalization is often the precursor to physical violence.

As the world’s predominant social media platform, Facebook has curated unprecedented and, in some respects, unconditioned access to people and their opinions. With consideration for the extent of that access, this post will focus on the possible effects of the NetzDG on Facebook and its users.

Facebook’s predicament

  • Regulatory methods

How Facebook intends to observe the NetzDG is unclear. The social media platform, whose users now constitute one-quarter of the world’s population, has previously been unwilling to disclose the details of its internal censorship processes. However, given the potential financial exposure and the sustained increase in user content, Facebook must, to some extent, increase its capacity to evaluate and regulate reported content. In response, Facebook announced in May that it would nearly double the number of employees tasked with removing content that violates its guidelines. Whether this increase in capacity will be sufficient will be determined in time.

However, and regardless of the move’s effectiveness, Facebook’s near doubling of capacity implies that human interpretation is the final authority, and that implication raises a number of questions. To what extent can manual censorship keep up with the constant increase in content? Can the same processes maintain efficacy in a climate where hate speech is increasingly prevalent in public discourse? If automated censorship is necessary, who decides the algorithm’s parameters, and how sensitive might those parameters be to the nuances of expression and interpretation? In passing the NetzDG, the German Government has relinquished the State’s authority to fully decide the answers to these questions. The jurisdiction of the State in matters of communication regulation has, to a certain extent, been privatised.

  • Censorship standards

Recently, the investigative journalism platform ProPublica claimed possession of documents purported to be internal censorship guidelines used at Facebook. The unverified guidelines instructed employees to remove the phrase ‘migrants are filth’ but to permit ‘migrants are filthy’. Whether the documents are legitimate is to some extent irrelevant: they provide a useful example of the specificity required where the aim is to guide one person’s interpretation of language toward a specific end – in this instance, toward a correct judgment of legality or illegality.

Regardless of the degree of specificity, it is impossible for any formulation of guidelines to cover every possible manifestation of hate speech. Interpreting reported content will therefore necessarily require some degree of discretion. This necessity raises the question: to what extent will affording private entities discretionary powers of censorship impede freedoms of communication, particularly where the discretion afforded is conditioned by financial risk and a determination is required within a 24-hour period?

  • Facebook’s position

Statements made by Facebook prior to the legislation’s enactment expressed concern about the effect the Act would have on the already complex issue of content moderation. ‘The draft law provides an incentive to delete content that is not clearly illegal when social networks face such a disproportionate threat of fine’, one statement noted. ‘(The Act) would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies’. Facebook’s reservation is telling: the company’s reluctance to adopt the role of moderator to the extent required alludes to the potential consequences of the liability imposed by the Act.

The problem with imposing this form of liability

Any decision made by a social media platform to censor user content will be supported by the anti-discrimination principles prescribed by the NetzDG. However, where the motivation behind discretionary decision-making shifts away from social utility towards financial management, the guiding considerations become efficiency and risk minimisation. Efficiency and risk minimisation in this instance require Facebook to either (i) increase capacity, which in turn results in an increased financial burden, or (ii) adopt guidelines that minimise exposure.

Seemingly, the approach adopted by Facebook is to increase capacity. However, Facebook’s concerns that the Act creates financial incentives to adopt guidelines that minimise exposure are significant. Such concerns demonstrate an understanding that requiring profit-motivated companies to do the work of the State within a 24-hour time frame will necessarily require a different set of parameters than those imposed on the regulation of oral hate speech. If Facebook, in drafting and applying those parameters, decides to err on the side of caution and, in some instances, censor otherwise legal content, that decision will have directly infringed the freedom of communication enjoyed by German citizens.

A democracy must be able to accommodate contrasting opinions if it purports to respect rights of communication and expression. Conversely, limitations on those rights may be justified if they benefit the majority. The NetzDG is Germany’s recognition that the nature of online communication – the speed at which ideas promulgate and proliferate, and the disconnect between comment and consequence created by online anonymity – requires that the existing limitations on the freedom of communication be adapted. Whether instances of infringement are warranted in the current climate is a difficult and complicated extension of the debate between the utility of regulating hate speech and the corresponding consequences for the freedoms of communication and expression. The decision to pass the NetzDG suggests the German Government considers the risk of infringement acceptable when measured against the consequences of unfettered hate speech.

Public recognition that the NetzDG poses a risk is important. It is best practice that, within a democracy, any new limit to liberty, oral or otherwise, be questioned and a justification given. Here the justification seems well founded. However, the answers to the questions posed by sceptics may prove telling as Germany positions itself at the forefront of the debate over online censorship.

(William is a student at the University of Melbourne and is currently interning at CCG)

How (not) to get away with murder: Reviewing Facebook’s live streaming guidelines

Introduction

The recent shooting in Cleveland, live-streamed on Facebook, has brought the social media company’s regulatory responsibilities into question. Since the launch of Facebook Live in 2016, the service’s role in raising political awareness has been acknowledged. However, the service has also been used to broadcast several instances of graphic violence.

The streaming of violent content (including instances of suicide, murder and gang rape) has raised serious questions about Facebook’s responsibility as an intermediary. While it is not technically feasible for Facebook to review all live videos as they are being streamed, or to filter them before they are streamed, the platform does have a routine procedure in place to take down such content. This post examines the guidelines in place for taking down live-streamed content and discusses alternatives to the existing reporting mechanism.

What guidelines are in place?

Facebook has ‘community standards’ in place; however, its internal regulation methods are unknown to the public. Live videos have to be in compliance with these standards, which specify that Facebook will remove content relating to ‘direct threats’, ‘self-injury’, ‘dangerous organizations’, ‘bullying and harassment’, ‘attacks on public figures’, ‘criminal activity’ and ‘sexual violence and exploitation’.

The company has stated that it ‘only takes one report for something to be reviewed’. This system of review has been criticized, since graphic content could go unnoticed without a report; moreover, there is no mandate of ‘compulsory reporting’ for viewers. Incidentally, the Cleveland shooting video was not detected by Facebook until it was flagged as ‘offensive’, a couple of hours after the incident. The company has also stated that it is working on developing ‘artificial intelligence’ that could help put an end to these broadcasts. However, it currently relies on the reporting mechanism, under which ‘thousands of people around the world’ review posts that have been reported. The reviewers check whether the content goes against the ‘community standards’ and ‘prioritize videos with serious safety implications’.

While deciding whether a video should be taken down, the reviewers also take the ‘context and degree’ of the content into consideration. For instance, content that is aimed at ‘raising awareness’, even if it displays violence, will be allowed, while content celebrating such violence will be taken down. To demonstrate, when a live video of civilian Philando Castile being shot by a police officer in Minnesota went viral, Facebook kept the video up on its platform, stating that it did not glorify the violent act.

Regulation

Other than the internal guidelines by which Facebook regulates itself, there have not been instances of government regulators, like the United States’ Federal Communications Commission (FCC), intervening. Unlike the realm of television, where the FCC regulates content and deems material ‘inappropriate’, social media websites are protected from content regulation.

This brings up the question of intermediary liability and Facebook’s liability for hosting graphic content. Under American law, there is a distinction between ‘publishers’ and ‘common carriers’. A common carrier only ‘enables communications’ and does not ‘publish content’; if a platform edits content, it is most likely a publisher. A ‘publisher’ has a higher level of responsibility for content hosted on its platform than a ‘carrier’. In most instances, social media companies are covered by Section 230 of the Communications Decency Act, a safe harbor provision under which they are not held liable for third-party content. However, questions have been raised about whether Facebook is a ‘publisher’ or a ‘common carrier’, and there seems to be no conclusive answer.

Conclusion

Several experts have considered possible solutions to this growing problem. Some believe that such features should be limited to certain partners, and should be opened up to the public only once additional safeguards and better artificial intelligence technologies are in place. In these precarious situations, enforcing stricter laws on intermediaries might not resolve the issue at hand. Some jurisdictions have ‘mandatory reporting’ provisions, specifically for crimes of sexual assault. In India, under Section 19 of the Protection of Children from Sexual Offences Act, 2012, ‘any person who has apprehension that an offence…is likely to be committed or has knowledge that such an offence has been committed’ has to report the offence. In the context of cyber-crimes, such a system of ‘mandatory reporting’ would shift the onus onto viewers and supplement the existing reporting system. Mandatory provisions of this nature do not exist in the United States, where most of the larger social media companies are based.

Accordingly, possible solutions should focus on strengthening the existing reporting system, rather than holding social media platforms liable.

Reviewing the Law Commission’s latest hate speech recommendations

Introduction

The Law Commission has recently released a report on hate speech laws in India. The Supreme Court in Pravasi Bhalai vs. Union of India asked the Law Commission to recommend changes to existing hate speech laws and to “define the term hate speech”. The report discusses the history of hate speech jurisprudence in India and in certain other jurisdictions. It stresses the difficulty of defining hate speech and the lack of a concise definition; in the absence of such a definition, certain ‘identifying criteria’ are set out to detect instances of hate speech. It also discusses the theories of Jeremy Waldron (the ‘dignity’ principle) and makes a case for protecting the interests of minority communities by regulating speech. In this regard, two new sections for the IPC have been proposed. They are as follows:

(i) Prohibiting incitement to hatred-

“153 C. Whoever on grounds of religion, race, caste or community, sex, gender identity, sexual orientation, place of birth, residence, language, disability or tribe –

(a)  uses gravely threatening words either spoken or written, signs, visible representations within the hearing or sight of a person with the intention to cause, fear or alarm; or

(b)  advocates hatred by words either spoken or written, signs, visible representations, that causes incitement to violence shall be punishable with imprisonment of either description for a term which may extend to two years, and fine up to Rs 5000, or with both.”.

(ii) Causing fear, alarm, or provocation of violence in certain cases.

“505 A. Whoever in public intentionally on grounds of religion, race, caste or community, sex, gender, sexual orientation, place of birth, residence, language, disability or tribe-

uses words, or displays any writing, sign, or other visible representation which is gravely threatening, or derogatory;

(i) within the hearing or sight of a person, causing fear or alarm, or;

(ii) with the intent to provoke the use of unlawful violence,

against that person or another, shall be punished with imprisonment for a term which may extend to one year and/or fine up to Rs 5000, or both”.

The author is of the opinion that these recommended amendments are vague and broadly worded and could lead to a chilling effect and over-censorship. Here are a few reasons why the recommendations might not be compatible with free speech jurisprudence:

  1. Three-part test

Article 10 of the European Convention on Human Rights lays down three requirements that need to be fulfilled for a restriction on free speech to be warranted. The Law Commission report also discusses this test; it comprises the requirement that a measure be ‘prescribed by law’, the need for a ‘legitimate aim’ and the test of ‘necessity and proportionality’.

Under the ‘prescribed by law’ standard, it is necessary for a restriction on free speech to be ‘clear and not ambiguous’. For instance, a phrase like ‘fear or alarm’ (which appears in Section 153A and Section 505) has been criticized for being ‘vague’. Without defining or restricting this term, the public would not know what constitutes ‘fear or alarm’, and would not know how to comply with the law. This standard was also reiterated in Shreya Singhal vs. Union of India, where it was held that the ambiguously worded Section 66A could be problematic for innocent people, since they would not be aware “which side of the line they fall” on.

  2. Expanding scope to online offences?

The newly proposed sections also mention that any ‘gravely threatening words within the hearing or sight of a person’ would be penalized. Presumably, the phrase ‘within the sight or hearing of a person’ broadens the scope of the provision and could bring online speech within the ambit of the IPC. This phrase is similar to the wording of Section 5(1) of the Public Order Act, 1986[1] in the United Kingdom, which penalizes “harassment, alarm or distress”. Even though that section does not explicitly mention that it covers offences on the internet, it has been presumed to do so.[2]

Similarly, if the intent of the framers of Section 153C is to expand its scope to cover online offences, it might introduce the same issues as the now-struck-down Section 66A of the IT Act did. Section 66A penalized the transmission of information that was ‘menacing’ or that promoted ‘hatred or ill will’. The over-breadth of these terms, along with the lowering of the ‘incitement’ threshold (discussed below), led to the section being struck down. Even though the proposed Section 153C does not provide for as many grounds (hatred, ill will, annoyance, etc.), it does explicitly lower the threshold from ‘incitement’ to ‘fear or alarm’/‘discrimination’.

  3. The standard of ‘hate speech’

The report also advocates penalizing the ‘fear or alarm’ caused by such speech, since it could potentially have the effect of ‘marginalizing a section of the society’. As mentioned above, it explicitly recommends that the threshold of ‘incitement to violence’ be lowered and that factors like ‘incitement to discrimination’ also be considered.

The Shreya Singhal judgment drew a distinction between ‘discussion, advocacy and incitement’, stating that a restriction on the freedom guaranteed under Article 19(1)(a) of the Constitution would only be justified where speech amounts to ‘incitement’, and not merely ‘discussion’ or ‘advocacy’. This distinction was drawn so that discussing or advocating ideas which could lead to problems with ‘public order’ or disturb the ‘security of the state’ could be differentiated from ‘incitement’, which establishes more of a ‘causal connection’.

Similarly, if the words used contribute to causing ‘fear or alarm’, the threshold of ‘incitement’ would be lowered, and constitutionally protected speech could be censored.

Conclusion

Despite the shortcomings mentioned above, the report is positive in a few ways. It draws attention to important contemporary issues affecting minority communities and to how speech is often used to mobilize communities against each other. It also relies on Jeremy Waldron’s ‘dignity principle’ to make a case for imposing differing hate speech standards to protect minority communities. In addition, the grounds for discrimination now include ‘tribe’ and ‘sexual orientation’, amongst others.

However, existing case law, coupled with recent instances of censorship, could make the insertion of these provisions troubling. India’s relationship with free speech is already dire; the Press Freedom Index ranks the country at 133 (out of 180) and the Freedom on the Net report rates India as only ‘partly free’ in this regard. The Law Commission might need to reconsider its recommendations for the sake of upholding free speech. Pravasi Bhalai called for sanctions against politicians’ speeches, but the recommendations made by the Law Commission might be far-reaching, and their effects could be chilling.

 

[1] Section 5- Harassment, alarm or distress.
(1)A person is guilty of an offence if he—
(a)uses threatening or abusive words or behaviour, or disorderly behaviour, or
(b)displays any writing, sign or other visible representation which is threatening or abusive,
within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby.

[2] David Wall, Cybercrime: The Transformation of Crime in the Information Age (Polity), at 123.

Two Takes on the Right to be Forgotten

Last month saw important developments in the discourse around the right to be forgotten. Two High Courts, Gujarat and Karnataka, delivered judgments on separate pleas to have particular judgments either removed from online repositories and search engine results, or to have personal information redacted from them. The Gujarat High Court dismissed the petition before it, holding that there was no legal basis to seek removal of a judgment from the Internet. The Karnataka High Court, on the other hand, ordered the Court’s Registry to redact the aggrieved person’s name before releasing the order to any entity wanting to publish it. This post examines both judgments to understand the reasoning and legal basis for denying or accepting a claim based on the right to be forgotten.

Gujarat High Court

According to the facts reproduced in the order, the petitioner in this case had criminal charges filed against him for several offences, including murder, which ultimately resulted in an acquittal. At the appellate stage too, the petitioner’s acquittal was confirmed. The judgment was classified as ‘non reportable’ but nevertheless published on an online portal that reproduces judgments from all superior courts in India. It was also indexed by Google, making it easily accessible. Being distressed about this, the petitioner sought ‘permanent restrain of free public exhibition of the judgement…over the Internet’.

While dismissing the petition, the Court held that it was permissible for third parties to obtain copies of the judgment under the Gujarat High Court Rules 1993, provided their application was accompanied by an affidavit and stated reasons for requiring the judgment. Moreover, it held that publication on a website did not amount to a judgment being reported, as the classification of ‘reportable’ was only relevant from the point of view of law reports. In the Court’s opinion, there was no legal basis to order such removal and the presence of the judgment on the Internet did not violate the petitioner’s rights under Article 21 – from which the right to privacy emanates.

The Court’s conclusion that, for the purposes of publication, a non-reportable judgment is on an equal footing with a reportable judgment is problematic, but hardly surprising. In a 2008 decision, while describing the functions of a law reporter that was a party before it, the Supreme Court observed that “the [law report] publishes all reportable judgments along with non-reportable judgments of the Supreme Court of India”. The distinction between reportable and non-reportable judgments was not in issue, but it does call for some introspection on the legal basis and rationale for the classification of judgments. In an article on the evolution of law reporting in India, the constitutional expert M.P. Jain explains that law reports were created as a response to Indian courts adopting the doctrine of precedent – the doctrine that binds lower courts to the decisions of higher courts. Precedent is created when a court lays down a new principle of law or changes or clarifies existing law. Consequently, the decision to make a ruling reportable (ideally) depends on whether it sets a precedent or not. Presumably then, there is a lesser public interest in having access to non-reportable judgments as compared to reportable ones.

While there is a clear distinction between publication in a law report and publication of the transcript of a judgment, the lack of a public interest element could have been taken into account by the High Court while deciding the petition. Moreover, it is unclear how reliance on the High Court Rules helped the Court decide against the petitioner. Third parties may be entitled to obtain a copy of a judgment, but the motivation behind a right to be forgotten is only to make information less accessible where it is determined that there is no countervailing interest in its publication. At its root, the right is intended to enable citizens to exercise greater control over their personal information, allowing them to live without the fear that a single Google search could jeopardise their professional or personal prospects.

Karnataka High Court

Less than three weeks after the Gujarat High Court’s decision, the Karnataka High Court ordered its Registry to redact the name of the petitioner’s daughter from the cause title as well as the body of an order before handing out copies of it to any ‘service provider’. It accepted the petitioner’s contention that a name-wise search on a search engine might throw up the order, adversely affecting his daughter’s reputation and relationship with her husband. The Court clarified that the name need not be redacted from the order published on the Court’s official website.

Towards the end, it remarked that such an action was ‘in line with the trend in Western countries’ where the right to be forgotten exists as a rule in ‘sensitive cases involving women in general and highly sensitive cases involving rape or affecting the modesty and reputation of the person concerned’.

This statement is problematic. The right to be forgotten emanates from the right to privacy and data protection, which are both regarded as fundamental rights in Europe. Basing the right on ideas of honour and modesty [of women] creates some cause for concern. Further, an important distinction between this case and the one before the Gujarat High Court is that neither Google nor any website publishing court judgments were made parties to it. The claim was based on redaction of information from the source, rather than de-listing it from search engine results or deleting it from a website. This is interesting, because it allows us to think of the right to be forgotten as a comprehensive concept, instead of a singular right to de-list information from search engine results. It provides courts with a choice, allowing them to opt for the least restrictive means to secure an individual’s right to online privacy.

However, the lack of a clear legal basis to allow or deny such claims is cause for concern. As is already apparent, different High Courts are likely to take divergent views on the right to be forgotten in the absence of an overarching data protection framework that grants such rights and prescribes limits to them. In several cases, the right to be forgotten will trigger a corresponding right to freedom of expression and the right to know. The criteria to balance these important but competing claims should be in place for courts to be able to decide such requests in a just manner.

The Supreme Court Hears Sabu Mathew George v. Union of India – Another Blow for Intermediary Liability

The Supreme Court heard arguments in Sabu Mathew George v. Union of India today. This writ petition was filed in 2008, with the intention of banning ‘advertisements’ offering sex selective abortions and related services from search engine results. According to the petitioner, these advertisements violate Section 22 of the Pre-Conception and Pre-Natal Diagnostic Techniques (Regulation and Prevention of Misuse) Act, 1994 (‘PCPNDT Act’) and, consequently, must be taken down.

A comprehensive round-up of the issues involved and the Court’s various interim orders can be found here. Today’s hearing focused mainly on three issues – the setting up of the Nodal Agency entrusted with providing details of websites to be blocked by search engines; the ambit and scope of the word ‘advertisement’ under the PCPNDT Act; and the obligation of search engines to find offending content and delete it on their own, without a government directive or judicial order to that effect.

Appearing for the Central Government, the Solicitor General informed the Court that, as per its directions, a Nodal Agency has now been constituted. An affidavit filed by the Centre provided details regarding the agency, including contact details, which would allow individuals to bring offending content to its notice. The Court was informed that the Agency would be functional within a week.

On the second issue, the petitioner’s counsel argued that removal of content must not be limited to paid or commercial advertisements, but should extend to other results that induce or otherwise lead couples to opt for sex selective abortions. This was opposed by Google and Yahoo!, who contended that organic search results must not be tampered with, as the law only bans ‘advertisements’. Google’s counsel averred that the legislation could never have intended the removal of generic search results, which directly facilitate information and research. On the other hand, the Solicitor General argued that the word ‘advertisement’ should be interpreted keeping the object of the legislation in mind – that is, to prevent sex-selective abortions. On behalf of Microsoft, it was argued that even if the broadest definition of ‘advertisement’ were adopted, what has to be seen is the animus – whether its objective is to solicit sex selective abortions – before content could be removed.

On the third issue, the counsel for the petitioner argued that search engines should automatically remove offending content – advertisements or otherwise – even in the absence of a court order or directions from the Nodal Agency. It was his contention that it was not feasible to keep providing search engines with updated keywords and/or results, and that the latter should employ technical means to automatically block content. This was also echoed by the Court. On behalf of all the search engines, it was pointed out that removal of content without an order from a court or the government goes directly against the Supreme Court’s judgment in Shreya Singhal v. Union of India. In that case, the Court had read down Section 79 of the Information Technology Act, 2000 (‘IT Act’) to hold that intermediaries are only required to take down content pursuant to court orders or government directives. The Court seemed to suggest that Shreya Singhal was decided in the context of a criminal offence (Section 66A of the IT Act) and is distinguishable on that ground.

Additionally, it was pointed out that even if the respondents were to remove content on their own, the lack of clarity over what constitutes an ‘advertisement’ prevents them from deciding what content to remove. Overbroad removal of content might open them up to more litigation from authors and researchers with informative works on the subject. The Court did not offer any interpretation of its own, except to say that the ‘letter and spirit’ of the law must be followed. The lack of clarity on what is deemed illegal could, as pointed out by several counsels, lead to censorship of legitimate information.

Despite these concerns, in its order today, the Court has directed every search engine to form an in-house expert committee that will, based “on its own understanding”, delete content that is violative of Section 22 of the PCPNDT Act. In case of any conflict, these committees should approach the Nodal Agency for clarification, and the latter’s response is meant to guide the search engines’ final decision. The case has been adjourned to April, when the Court will see if the mechanism in place has been effective in resolving the petitioner’s grievances.