The Supreme Court’s Free Speech To-Do List

Written by the Civil Liberties team at CCG

The Supreme Court of India is often tasked with adjudicating disputes that shape the course of free speech in India. Here’s a roundup of some key cases currently before the Supreme Court.

Kamlesh Vaswani vs. Union of India

A PIL petition was filed in 2013 seeking a ban on pornography in India. The petition also prayed for a direction to the Union Government to “treat watching of porn videos and sharing as non-bailable and cognizable offence.”

During the course of the proceedings, the Department of Telecommunications (DoT) ordered ISPs to block over 800 websites allegedly hosting pornographic content, despite the freedom of expression and privacy-related concerns raised before the Supreme Court. The Government stated that the list of websites had been submitted to the DoT by the petitioners, and that the websites had been blocked without any verification. The ban was revoked after much criticism.

The case, currently pending before the Supreme Court, also has implications for the intermediary liability regime in India. Internet Service Providers may claim safe harbor from liability for content they host, as long as they satisfy certain due diligence requirements under Section 79 of the IT Act, read with the Information Technology (Intermediaries Guidelines) Rules, 2011. After the Supreme Court read down these provisions in Shreya Singhal v. Union of India, an intermediary’s primary obligation is to comply with court orders and government directions seeking takedown of content. The petition before the Supreme Court seeks to impose an additional obligation on ISPs to identify and block all pornographic content, or risk being held liable. Our work on this case can be found here.

Sabu Mathew George vs. Union of India

This is a 2008 case in which a writ petition was filed to ban ‘advertisements’ relating to pre-natal sex determination from search engines in India. Several orders have been passed, and the state has now created a nodal agency that provides search engines with details of websites to block. The ‘doctrine of auto-block’ is an important consideration in this case: in one of its orders, the Court listed roughly 40 search terms and stated that the respondents should ensure that any attempt at looking up these terms would be ‘auto-blocked’, which raises concerns about intermediary liability and free speech.

Currently, a note has been filed by the petitioner’s advocate, which states that search engines have the capacity to take down such content, but that even upon intimation they only end up taking down certain links and not others. Our work on this case can be found at the following links: 1, 2, 3.

Prajwala vs. Union of India

This is a 2015 case in which an NGO, Prajwala, sent the Supreme Court a letter raising concerns about videos of sexual violence being distributed on the internet. The letter sought to draw attention to the existence of such videos, as well as their rampant circulation on online platforms.

Based on the contents of the letter, a suo motu petition was registered. Google, Facebook, WhatsApp, Yahoo and Microsoft were also impleaded as parties. A committee was constituted to “assist and advise this Court on the feasibility of ensuring that videos depicting rape, gang rape and child pornography are not available for circulation”. The relevant order, which discusses the committee’s recommendations, can be found here. One of the stated objectives of the committee was to examine technological solutions to the problem, for instance, auto-blocking. This raises issues related to intermediary liability and free speech.

 


Call for Applications – Civil Liberties

The Centre for Communication Governance at the National Law University Delhi (CCG) invites applications for research positions in its Civil Liberties team on a full-time basis.

About the Centre

The Centre for Communication Governance is the only academic research centre dedicated to working on information law and policy in India and, in a short span of four years, has become a leading centre on information policy in Asia. It seeks to embed human rights and good governance within communication policy and protect digital rights in India through rigorous academic research and capacity building.

The Centre routinely works with a range of international academic institutions and policy organizations. These include the Berkman Klein Center at Harvard University, the Programme in Comparative Media Law and Policy at the University of Oxford, the Center for Internet and Society at Stanford Law School, the Hans Bredow Institute at the University of Hamburg and the Global Network of Interdisciplinary Internet & Society Research Centers. We engage regularly with government institutions and ministries such as the Law Commission of India, the Ministry of Electronics & IT, the Ministry of External Affairs and the Ministry of Law & Justice, as well as with the International Telecommunication Union. We work actively to provide the executive and judiciary with useful research in the course of their decision-making on issues relating to civil liberties and technology.

CCG has also constituted two advisory boards: a faculty board within the University, and another consisting of academic members of our international networks. These boards oversee the functioning of the Centre and provide high-level inputs on the work undertaken by CCG from time to time.

About Our Work

The work at CCG is designed to build competence and raise the quality of discourse in research and policy around issues concerning civil liberties and the Internet, cybersecurity and global Internet governance. The research and policy output is intended to catalyze effective, research-led policy making and informed public debate around issues in technology and Internet governance.

The work of our civil liberties team covers the following broad areas:

  1. Freedom of Speech & Expression: Research in this area focuses on human rights and civil liberties in the context of the Internet and emerging communication technology in India. Research on this track squarely addresses the research gaps around the architecture of the Internet and its impact on free expression.
  2. Access, Markets and Public Interest: The research under this area will consider questions of access, including how the human right to free speech could help guarantee access to the Internet. It will identify areas where competition law would need to intervene to ensure free, fair and human rights-compatible access to the Internet, and to opportunities to communicate using online services. Work in this area will consider how existing competition and consumer protection law could be applied to ensure that freedom of expression in new media, and particularly on the internet, is protected given market realities on the supply side. Under this track, we will put out material on net neutrality concerns, which are closely associated with competition, innovation, media diversity and the protection of human rights, especially the rights to free expression and to receive information, and particularly with substantive equality across media. It will also engage with existing theories of media pluralism in this context.
  3. Privacy, Surveillance & Big Data: Research in this area focuses on surveillance as well as data protection practices, laws and policies. The work may be directed either at the normative questions that arise in the context of surveillance or data protection, or at empirical work, including data gathering and analysis, with a view to enabling policymakers and lawmakers to better understand the pragmatic concerns in developing realistic and effective privacy frameworks. This work area extends to the right to be forgotten and data localization.

Role

CCG is a young and continuously evolving organization, and members of the Centre are expected to be active participants in building a collaborative, merit-led institution and a lasting community of highly motivated young researchers.

Selected applicants will ordinarily be expected to design and produce units of publishable research with the Director(s)/senior staff members. They will also recommend and assist with designing and executing policy positions and external actions on a broad range of information policy issues.

Equally, they will be expected to participate in other work, including writing opinion pieces, blog posts, press releases and memoranda, and to help with outreach. The selected applicants will also represent CCG in the media, at events, roundtables and conferences, and before relevant governmental and other bodies. In addition, they will have organizational responsibilities such as providing inputs for grant applications, networking, and designing and executing Centre events.

Qualifications

The Centre welcomes applications from candidates with advanced degrees in law, public policy and international relations.

  • Candidates should preferably be able to provide evidence of an interest in human rights, technology law and policy, Internet governance or national security law. In addition, they must have a demonstrable capacity for high-quality, independent work.
  • In addition to written work, a project/programme manager within CCG will be expected to play a significant leadership role. This ranges from proactive agenda-setting to administrative and team-building responsibilities.
  • Successful candidates for the project/programme manager position should show great initiative in managing both their own and their team’s workloads. They will also be expected to lead and motivate their team through high-stress periods and in responding to pressing policy questions.

However, the length of your resume is less important than the other qualities we are looking for. As a young, rapidly-expanding organization, CCG anticipates that all members of the Centre will have to manage large burdens of substantive as well as administrative work in addition to research. We are looking for highly motivated candidates with a deep commitment to building information policy that supports and enables human rights and democracy.

At CCG, we aim very high and we demand a lot of each other in the workplace. We take great pride in high-quality outputs and value individuality and perfectionism. We like to maintain the highest ethical standards in our work and workplace, and love people who manage all of this while being as kind and generous as possible to colleagues, collaborators and everyone else within our networks. A sense of humour will be most welcome. Even if you do not necessarily fit the requirements mentioned in the bulleted points above but bring the other qualities we look for, we would love to hear from you.

[The Centre reserves the right to not fill the position(s) if it does not find suitable candidates among the applicants.]

Positions

Based on experience and qualifications, successful applicants will be placed in one of the following positions. Please note that our interview panel has the discretion to determine which profile would be most suitable for each applicant.

  • Programme Officer (2-4 years’ work experience)
  • Project Manager (4-6 years’ work experience)
  • Programme Manager (6-8 years’ work experience)

A Master’s degree from a highly regarded programme might count towards work experience.

CCG staff work at the Centre’s offices on National Law University Delhi’s campus. The positions on offer are for a duration of one year, and we expect a commitment of two years.

Remuneration

The salaries will be competitive, and will usually range from ₹50,000 to ₹1,20,000 per month, depending on multiple factors including relevant experience, the position and the larger research project under which the candidate can be accommodated.

Where candidates demonstrate exceptional competence in the opinion of the interview panel, there is a possibility for greater remuneration.

Procedure for Application

Interested applicants are required to send the following information and materials by December 30, 2017 to ccgcareers@nludelhi.ac.in.

  1. Curriculum Vitae (maximum 2 double spaced pages)
  2. Expression of Interest in joining CCG (maximum 500 words).
  3. Contact details for two referees (at least one academic). Referees must be informed that they might be contacted for an oral reference or a brief written reference.
  4. One academic writing sample of between 1000 and 1200 words (essay or extract, published or unpublished).

Shortlisted applicants may be called for an interview.

 

An update on Sabu Mathew George vs. Union of India

Today, the Supreme Court heard the ongoing matter of Sabu Mathew George vs. Union of India. In 2008, a petition was filed to ban advertisements endorsing sex-selective abortions from search engine results. Such advertisements are illegal under Section 22 of the Pre-Conception and Pre-Natal Diagnostic Techniques Act, 1994 (‘PNDT Act’). Several orders have been passed over the last few years, the last of which was passed on April 13th, 2017. Through these orders, the Court had directed the Centre to set up a nodal agency where complaints against sex-selective ads could be lodged. The Court had also ordered the search engines involved to set up an in-house expert committee in this regard. The order dated April 13th stated that compliance with this mechanism would be checked hereafter. Our blog posts covering these arguments and other issues relevant to search neutrality can be found here and here.

Today, the petitioner’s counsel stated that the nodal agency in question should be able to take suo motu cognisance of complaints, and not restrict its functioning to the method prescribed previously. Currently, individuals can file complaints with the nodal agency, which are then forwarded to the search engine in question. The relevant part of the order dated November 16, 2016 is as follows:

“…we direct that the Union of India shall constitute a “Nodal Agency” and give due advertisement in television, newspapers and radio by stating that it has been created in pursuance of the order of this Court and anyone who comes across anything that has the nature of an advertisement or any impact in identifying a boy or a girl in any method, manner or mode by any search engine shall be brought to its notice. Once it is brought to the notice of the Nodal Agency, it shall intimate the concerned search engine or the corridor provider immediately and after receipt of the same, the search engines are obliged to delete it within thirty-six hours and intimate the Nodal Agency. Needless to say, this is an interim arrangement pending the discussion which we have noted herein-before…”

On the respondents’ side, counsel stated that over the last few months Microsoft had received only one complaint and Yahoo had not received any, arguing that the nodal agency would not have to take on a higher level of regulation. Further, on the issue of suo motu cognisance, they stated that it would be untenable to expect a government agency to ‘tap’ into search results. As per the counsel, the last order had only contemplated checking compliance with the nodal agency mechanism and constituting an expert committee, both of which had been established.

The petitioners stated that they would need more time and would suggest other measures for effective regulation.

The next hearing will take place on the 24th of November, 2017.

Understanding the ‘NetzDG’: Privatised censorship under Germany’s new hate speech law

By William James Hargreaves

The Network Enforcement Act

The Network Enforcement Act (NetzDG), a law passed by the German Government on the 30th of June, allows social media companies to be fined up to 50 million Euros (approximately 360 crore rupees) if they persistently fail to remove hate speech from their platforms within 24 hours of the content being posted. Companies have up to one week where the illegality of the content is debatable.

NetzDG is intended to hold social media companies financially liable for the opinions posted on their platforms. The Act will effectively subject social media platforms to the stricter content standards demanded of traditional media broadcasters.

Why was the Act introduced?

Germany is one of the world’s strictest regulators of hate speech. The State’s Criminal Code covers issues of defamation, public threats of violence and incitement to illegal conduct, and provides for incarceration for Holocaust denial or inciting hatred against minorities. Germany is a country sensitive to the persuasive power of oratory in radicalizing opinion. The parameters of these sensitivities are being tested as the influx of more than one million asylum seekers and migrants has catalyzed a notably belligerent public discourse.

In response to the changing discourse, Facebook and a number of other social media platforms consented in December 2015 to the terms of a code of conduct drafted by the Merkel Government. The code of conduct was intended to ensure that platforms adhered to Germany’s domestic law when regulating user content. However, a study monitoring Facebook’s compliance found the company deleted or blocked only 39 percent of reported content, a rate that put Facebook in breach of the agreement.

NetzDG turns the voluntary agreement into a binding legal obligation, making Facebook liable for any future failure to adhere to its terms.

In a statement made following the law’s enactment, German Justice Minister Heiko Maas declared: ‘With this law, we put an end to the verbal law of the jungle on the Internet and protect the freedom of expression for all… This is not a limitation, but a prerequisite for freedom of expression’. The premise of Minister Maas’s position, and the starting point for the principles that validate the illegality of hate speech, is that verbal radicalization is often the precursor to physical violence.

As the world’s predominant social media platform, Facebook has accumulated unprecedented and, in some respects, unconditioned access to people and their opinions. With consideration for the extent of that access, this post will focus on the possible effects of the NetzDG on Facebook and its users.

Facebook’s predicament

  • Regulatory methods

How Facebook intends to observe the NetzDG is unclear. The social media platform, whose users now constitute one-quarter of the world’s population, has previously been unwilling to disclose the details of its internal censorship processes. However, given the potential financial exposure and the sustained increase in user content, Facebook must, to some extent, increase its capacity to evaluate and regulate reported content. In response, Facebook announced in May that it would nearly double the number of employees tasked with removing content that violated its guidelines. Whether this increase in capacity will be sufficient will be determined in time.

However, and regardless of the move’s effectiveness, Facebook’s near doubling of capacity implies that human interpretation is the final authority, and that implication raises a number of questions: To what extent can manual censorship keep up with the consistent increase in content? Can the same processes maintain efficacy in a climate where hate speech is increasingly prevalent in public discourse? If automated censorship is necessary, who decides the algorithm’s parameters and how sensitive might those parameters be to the nuances of expression and interpretation? In passing the NetzDG, the German Government has relinquished the State’s authority to fully decide the answer to these questions. The jurisdiction of the State in matters of communication regulation has, to a certain extent, been privatised.

  • Censorship standards

Recently, an investigative journalism platform called ProPublica claimed possession of documents purported to be internal censorship guidelines used at Facebook. The unverified guidelines instructed employees to remove the phrase ‘migrants are filth’ but permit ‘migrants are filthy’. Whether the documents are legitimate is to some extent irrelevant: the documents provide a useful example of the specificity required where the aim is to guide one person’s interpretation of language toward a specific end – in this instance toward a correct judgment of legality or illegality.

Regardless of the degree of specificity, it is impossible for any formulation of guidelines to cover every possible manifestation of hate speech. Interpreting reported content will therefore necessarily require some degree of discretion. This necessity raises the question: to what extent will affording private entities discretionary powers of censorship impede freedoms of communication, particularly where the discretion afforded is conditioned by financial risk and a determination is required within a 24-hour period?

  • Facebook’s position

Statements made by Facebook prior to the legislation’s enactment expressed concern about the effect the Act will have on the already complex issue of content moderation. ‘The draft law provides an incentive to delete content that is not clearly illegal when social networks face such a disproportionate threat of fine’, a statement noted. ‘(The Act) would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies’. Facebook’s reservation is telling: the company’s reluctance to adopt the role of moderator to the extent required alludes to the potential consequences of the liability imposed by the Act.

The problem with imposing this form of liability

Any decision made by a social media platform to censor user content will be supported by the anti-discrimination principles prescribed by the NetzDG. However, where the motivation behind discretionary decision-making shifts away from social utility towards financial management, the guiding considerations become efficiency and risk minimisation. Efficiency and risk minimisation in this instance require Facebook to either (i) increase capacity, which in turn results in an increased financial burden, or (ii) adopt guidelines that minimise exposure.

Seemingly, the approach adopted by Facebook is to increase capacity. However, Facebook’s concerns that the Act creates financial incentives to adopt guidelines that minimise exposure are significant. Such concerns demonstrate an understanding that requiring profit-motivated companies to do the work of the State within a 24-hour time frame will necessarily require a different set of parameters than those imposed on the regulation of oral hate speech. If Facebook, in drafting and applying those parameters, decides to err on the side of caution and, in some instances, censor otherwise legal content, that decision will have directly infringed the freedom of communication enjoyed by German citizens.

A democracy must be able to accommodate contrasting opinions if it purports to respect rights of communication and expression. Conversely, limitations on the rights enjoyed may be justified if they benefit the majority. The NetzDG is Germany’s recognition that the nature of online communication (the speed at which ideas promulgate and proliferate, and the disconnect between comment and consequence created by online anonymity) requires that the existing limitations on the freedom of communication be adapted. Whether instances of infringement are warranted in the current climate is a difficult and complicated extension of the debate between the utility of regulating hate speech and the corresponding consequences for the freedoms of communication and expression. The decision to pass the NetzDG suggests the German Government considers the risk of infringement acceptable when measured against the consequences of unfettered hate speech.

Public recognition that the NetzDG poses a risk is important. It is best practice that, within a democracy, any new limit to liberty, oral or otherwise, be questioned and a justification given. Here the justification seems well founded. However, the answers to the questions posed by sceptics may prove telling as Germany positions itself at the forefront of the debate over online censorship.

(William is a student at the University of Melbourne and is currently interning at CCG)

How (not) to get away with murder: Reviewing Facebook’s live streaming guidelines

Introduction

The recent shooting in Cleveland, live streamed on Facebook, has brought the social media company’s regulatory responsibilities into question. Since the launch of Facebook Live in 2016, the service’s role in raising political awareness has been acknowledged. However, the service has also been used to broadcast several instances of graphic violence.

The streaming of violent content (including instances of suicide, murder and gang rape) has raised serious questions about Facebook’s responsibility as an intermediary. While it is not technically feasible for Facebook to review all live videos while they are being streamed, or to filter them before they are streamed, the platform does have a routine procedure in place to take down such content. This post will examine the guidelines in place for taking down live streamed content and discuss alternatives to the existing reporting mechanism.

What guidelines are in place?

Facebook has ‘community standards’ in place; however, its internal regulation methods are unknown to the public. Live videos have to comply with these ‘community standards’, which specify that Facebook will remove content relating to ‘direct threats’, ‘self-injury’, ‘dangerous organizations’, ‘bullying and harassment’, ‘attacks on public figures’, ‘criminal activity’ and ‘sexual violence and exploitation’.

The company has stated that it ‘only takes one report for something to be reviewed’. This system of review has been criticized since graphic content could go unnoticed without a report. In addition, this form of reporting can be ineffective since there is no mandate of ‘compulsory reporting’ for viewers. Incidentally, the Cleveland shooting video was not detected by Facebook until it was flagged as ‘offensive’, a couple of hours after the incident. The company has also stated that it is working on developing ‘artificial intelligence’ that could help put an end to these broadcasts. However, it currently relies on the reporting mechanism, where ‘thousands of people around the world’ review posts that have been reported against. The reviewers check whether the content goes against the ‘community standards’ and ‘prioritize videos with serious safety implications’.

While deciding if a video should be taken down, the reviewers will also take the ‘context and degree’ of the content into consideration. For instance, content that is aimed at ‘raising awareness’, even if it displays violence, will be allowed. However, content that is celebrating such violence would be taken down. To demonstrate, when a live video of civilian Philando Castile being shot by a police officer in Minnesota went viral, Facebook kept the video up on their platform, stating that it did not glorify the violent act.

Regulation

Other than the internal guidelines by which Facebook regulates itself, there have not been instances of government regulators, like the United States’ Federal Communications Commission (FCC), intervening. Unlike the realm of television, where the FCC regulates content and deems material ‘inappropriate’, social media websites are protected from content regulation.

This brings up the question of intermediary liability and Facebook’s liability for hosting graphic content. Under American law, there is a distinction between ‘publishers’ and ‘common carriers’. A common carrier only ‘enables communications’ and does not ‘publish content’. If a platform edits content, it is most likely a publisher. A ‘publisher’ has a higher level of responsibility for content hosted on its platform, unlike a ‘carrier’. In most instances, social media companies are covered under Section 230 of the Communications Decency Act, a safe harbor provision, by which they would not be held liable for third-party content. However, questions have been raised about Facebook’s role as a ‘publisher’ or ‘common carrier’, and there seems to be no conclusive answer.

Conclusion

Several experts have considered possible solutions to this growing problem. Some believe that such features should be limited to certain partners, and should be opened up to the public only once additional safeguards and better artificial intelligence technologies are in place. In these precarious situations, enforcing stricter laws on intermediaries might not resolve the issue at hand. Some jurisdictions have ‘mandatory reporting’ provisions, specifically for crimes of sexual assault. In India, under Section 19 of the Protection of Children from Sexual Offences Act, 2012, ‘any person who has apprehension that an offence…is likely to be committed or has knowledge that such an offence has been committed’ has to report such an offence. In the context of cyber-crimes, this system of ‘mandatory reporting’ would shift the onus onto viewers and supplement the existing reporting system. Mandatory provisions of this nature do not exist in the United States, where most of the larger social media companies are based.

Accordingly, possible solutions should focus on strengthening the existing reporting system, rather than holding social media platforms liable.

The Supreme Court Hears Sabu Mathew George v. Union of India – Another Blow for Intermediary Liability

The Supreme Court heard arguments in Sabu Mathew George v. Union of India today. This writ petition was filed in 2008 with the intention of banning ‘advertisements’ offering sex-selective abortions and related services from search engine results. According to the petitioner, these advertisements violate Section 22 of the Pre-Conception and Pre-Natal Diagnostic Techniques (Regulation and Prevention of Misuse) Act, 1994 (‘PCPNDT Act’) and consequently must be taken down.

A comprehensive round-up of the issues involved and the Court’s various interim orders can be found here. Today’s hearing focused mainly on three issues: first, the setting up of the Nodal Agency entrusted with providing search engines with details of websites to be blocked; second, the ambit and scope of the word ‘advertisement’ under the PCPNDT Act; and third, the obligation of search engines to find offending content and delete it on their own, without a government directive or judicial order to that effect.

Appearing for the Central Government, the Solicitor General informed the Court that, as per its directions, a Nodal Agency has now been constituted. An affidavit filed by the Centre provided details regarding the agency, including contact details, which would allow individuals to bring offending content to its notice. The Court was informed that the Agency would be functional within a week.

On the second issue, the petitioner’s counsel argued that removal of content must not be limited only to paid or commercial advertisements, but must extend to other results that induce or otherwise lead couples to opt for sex-selective abortions. This was opposed by Google and Yahoo!, who contended that organic search results must not be tampered with, as the law only bans ‘advertisements’. Google’s counsel averred that the legislation could never have intended the removal of generic search results, which directly facilitate information and research. On the other hand, the Solicitor General argued that the word ‘advertisement’ should be interpreted keeping the object of the legislation in mind, that is, to prevent sex-selective abortions. On behalf of Microsoft, it was argued that even if the broadest definition of ‘advertisement’ were adopted, what has to be seen before content can be removed is the animus: whether its objective is to solicit sex-selective abortions.

On the third issue, the counsel for the petitioner argued that search engines should automatically remove offending content, advertisements or otherwise, even in the absence of a court order or directions from the Nodal Agency. It was his contention that it was not feasible to keep providing search engines with updated keywords and/or results, and that the latter should employ technical means to automatically block content. This was also echoed by the Court. On behalf of all the search engines, it was pointed out that removal of content without an order from a court or the government was directly against the Supreme Court’s judgment in Shreya Singhal v. Union of India. In that case, the Court had read down Section 79 of the Information Technology Act, 2000 (‘IT Act’) to hold that intermediaries are only required to take down content pursuant to court orders or government directives. The Court seemed to suggest that Shreya Singhal was decided in the context of a criminal offence (Section 66A of the IT Act) and is distinguishable on that ground.

Additionally, it was also pointed out that even if the respondents were to remove content on their own, the lack of clarity over what constitutes an ‘advertisement’ prevents them from deciding what content to remove. Overbroad removal of content might open them up to more litigation from authors and researchers with informative works on the subject. The Court did not offer any interpretation of its own, except to say that the ‘letter and spirit’ of the law must be followed. The lack of clarity on what is deemed illegal could, as pointed out by several counsels, lead to censorship of legitimate information.

Despite these concerns, in its order today, the Court has directed every search engine to form an in-house expert committee that will, based “on its own understanding”, delete content that is violative of Section 22 of the PCPNDT Act. In case of any conflict, these committees should approach the Nodal Agency for clarification, and the latter’s response is meant to guide the search engines’ final decision. The case has been adjourned to April, when the Court will see if the mechanism in place has been effective in resolving the petitioner’s grievances.

Roundup of Sabu Mathew George vs. Union of India: Intermediary liability and the ‘doctrine of auto-block’

Introduction

In 2008, Sabu Mathew George, an activist, filed a writ petition to ban ‘advertisements’ relating to pre-natal sex determination from search engines in India. According to the petitioner, the display of these results violated Section 22 of the Pre-Natal Diagnostic Techniques (Regulation and Prevention of Misuse) Act, 1994. Between 2014 and 2015, the Supreme Court ordered the respondents to block these advertisements several times. Finally, on November 16, 2016, the Supreme Court ordered the respondents, Google, Microsoft and Yahoo, to ‘auto-block’ advertisements relating to sex determination. It also ordered the creation of a ‘nodal agency’ that would provide search engines with the details of websites to block. The next hearing for this case is scheduled for February 16, 2017.

The judgment has been criticised for over-breadth and the censorship of legitimate content. We discuss some issues with the judgment below.

Are search engines ‘conduits’ or ‘content-providers’?

An earlier order in this case, dated December 4, 2012, states that the respondents argued that they “provided a corridor and did not have any control” over the information hosted on other websites.

There is often confusion surrounding the characterization of search engines as either ‘conduits’ or ‘content-providers’. A conduit is a ‘corridor’ for information, otherwise known as an intermediary. A content provider, however, produces or alters the displayed content. It has been suggested by authors like Frank Pasquale that search engines (Google specifically) take advantage of this grey area by portraying themselves as conduits or content-providers, to avoid liability. For instance, Google will likely portray itself as a content-provider when it needs to claim First Amendment protection in the United States, and as a conduit for information when it needs to defend itself against First Amendment attacks. When concerns related to privacy arise, search engines attempt to claim editorial rights and freedom of expression. In contrast, when intellectual property matters or defamation claims arise, they portray themselves as ‘passive conduits’.

In the Indian context, there has been similar dissonance about the characterization of search engines. In the aftermath of the Sabu Mathew George judgment, the nature of search engines was debated by several commentators. One commentator has pointed out that the judgment would contradict the Supreme Court’s decision reading down Section 79(3)(b) of the Information Technology Act, 2000 (IT Act) in Shreya Singhal vs. Union of India, where the liability of intermediaries was restricted. This commentator therefore characterized search engines as passive conduits/intermediaries, and argued that the Sabu Mathew George judgment would effectively hold intermediaries liable for content hosted without their knowledge. Another commentator has criticised this argument, stating that if Google willingly publishes advertisements through its AdWords system, then it is a publisher and not merely an intermediary. This portrays Google as a content-provider.

Sabu Mathew George defies existing legal standards 

As mentioned above, the Sabu Mathew George judgment contradicts the Supreme Court’s decision in Shreya Singhal, where Section 79(3)(b) of the IT Act was read down and the liability of intermediaries was restricted. The Court in Shreya Singhal held that intermediaries would only be compelled to take down content through court orders/government notifications. However, in the present case, the Supreme Court has repeatedly ordered the respondents to devise ways to monitor and censor their own content and even resort to ‘auto-blocking’ results.

The order dated November 16, 2016 also contradicts the Blocking Rules framed under the Information Technology Act, 2000. In the order, the Supreme Court directed the Centre to create a ‘nodal agency’ which would allow people to register complaints against websites violating Section 22 of the PNDT Act. These complaints would then be passed on to the concerned search engine in the manner described below:

“Once it is brought to the notice of the Nodal Agency, it shall intimate the concerned search engine or the corridor provider immediately and after receipt of the same, the search engines are obliged to delete it within thirty-six hours and intimate the Nodal Agency.”
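
For illustration only, the interim workflow the order contemplates can be sketched in a few lines of Python; the function names and data fields below are hypothetical assumptions, not drawn from the order or from any actual system.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the complaint flow the order describes:
# complaint -> Nodal Agency -> intimation to the search engine ->
# deletion within thirty-six hours -> confirmation back to the Agency.
# All names and fields here are illustrative assumptions.

DELETION_WINDOW = timedelta(hours=36)

def intimate_search_engine(url: str, search_engine: str, intimated_at: datetime) -> dict:
    """Record the Nodal Agency's intimation and the 36-hour deletion deadline."""
    return {
        "url": url,
        "search_engine": search_engine,
        "intimated_at": intimated_at,
        "delete_by": intimated_at + DELETION_WINDOW,
        "status": "intimated",
    }

def confirm_deletion(complaint: dict, deleted_at: datetime) -> dict:
    """Mark the content deleted and note whether the deadline was met."""
    complaint["status"] = "deleted"
    complaint["within_deadline"] = deleted_at <= complaint["delete_by"]
    return complaint

complaint = intimate_search_engine("http://example.com/ad", "ExampleSearch",
                                   datetime(2016, 11, 16, 10, 0))
complaint = confirm_deletion(complaint, datetime(2016, 11, 17, 18, 0))
print(complaint["status"], complaint["within_deadline"])  # deleted True
```

Notably, nothing in this flow corresponds to an independent review of the complaint before the content comes down; that omission is what the next paragraph contrasts with the Blocking Rules.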

The functioning of this nodal agency would circumvent the Blocking Rules framed under the Information Technology Act. Under the Blocking Rules, the Committee for Examination of Requests reviews each blocking request and verifies whether it is in line with Section 69A of the IT Act. The Sabu Mathew George order does not prescribe a similar review system. While the author acknowledges that the nodal agency’s blocking mechanism is not a statutory mandate, its actions could still lead to over-blocking.

‘Organic search results’ and ‘sponsored links’

One important distinction in this case is between ‘organic search results’ and ‘sponsored links’. A submission by MeitY (DeitY) explaining the difference between the two was not addressed by the Supreme Court in the order dated December 4, 2014.

Section 22 of the PNDT Act criminalizes the display of ‘advertisements’, but does not offer a precise definition of the term. The respondents argued that ‘advertisement’ would relate to ‘sponsored links’ and not ‘organic search results’. As per the order dated September 19, 2016, Google and Microsoft agreed to remove ‘advertisements’ and stated that search results should not be contemplated under Section 22 since they are not ‘commercial communication’. However, on November 16, 2016, the Supreme Court stated that the block would extend to both ‘sponsored links’ and ‘organic search results’. The respondents expressed concern about this rationale, stating that legitimate information on pre-natal sex determination would become unavailable and that the ‘freedom of access to information’ would be restricted. The Court stated that this freedom could be curbed for the sake of the larger good.

The ‘doctrine of auto-block’

In the order dated September 19, 2016, the Court discussed the ‘doctrine of auto-block’ and the responsibility of the respondents to block illegal content themselves. In this order, the Court listed roughly 40 search terms and stated that the respondents should ensure that any attempt at looking up these terms would be ‘auto-blocked’. The respondents also agreed to disable the ‘auto-complete’ feature for these terms.
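
To see why commentators worry about over-blocking, consider a minimal sketch of what naive keyword-based ‘auto-blocking’ looks like; the blocked terms and queries below are hypothetical stand-ins, not the roughly 40 terms actually listed in the Court’s order.

```python
# Hypothetical illustration of naive keyword-based "auto-blocking".
# The terms and queries are invented for demonstration; they are not
# the search terms listed in the Court's order.

BLOCKED_TERMS = [
    "prenatal sex determination",
    "gender selection test",
]

def is_auto_blocked(query: str) -> bool:
    """Return True if the query contains any blocked term (case-insensitive)."""
    q = query.lower()
    return any(term in q for term in BLOCKED_TERMS)

queries = [
    "prenatal sex determination clinic",             # the kind of query the order targets
    "PNDT Act prenatal sex determination research",  # legitimate academic query, also caught
]

for query in queries:
    print(query, "->", "blocked" if is_auto_blocked(query) else "allowed")
```

Because simple term matching cannot distinguish solicitation from scholarship, both queries above would be suppressed, which is precisely the over-blocking and chilling-effect concern discussed below and in the conclusion.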

Google has blocked search terms from its auto-complete system in several other countries, often with little success. This article points out that illegal search terms relating to child pornography have been allowed on auto-complete while more innocuous terms like ‘homosexual’ have been blocked by Bing, showing that this system of blocking has several discrepancies.

Beyond the chilling effect on free speech, disabling auto-complete can also have other adverse effects. In one instance, the owner of a sex-toy store complained about her business not benefitting from the auto-complete feature, as several others had. She stated that ‘…Google is … making it easier for people to find really specific information related to a search term. In a sense it’s like we’re not getting the same kind of courtesy of that functionality.’ Similarly, several legitimate websites discussing pre-natal sex determination might lose potential readers or viewers if auto-complete is disabled.

Conclusion

The author would like to make two broad suggestions. First, the functioning of the nodal agency should be revisited. The recommended system lacks accountability and transparency, and will certainly lead to over-blocking as well as a chilling effect.

Second, search engines should not be given overarching powers to censor their own websites. It is well established that this leads to over-censorship. In addition to contradicting Section 79(3)(b) of the IT Act, such an arrangement would also delegate judicial authority to private search engines.

According to a study conducted by the Centre for Internet & Society, Bangalore in January 2015, searching for keywords relating to pre-natal sex determination on Google, Yahoo and Bing did not yield a large number of ‘organic search results’ or ‘sponsored links’ that would violate Section 22 of the PNDT Act. Between 2015 and 2016, search engines have presumably followed the Supreme Court’s orders and filtered out illegal search results and advertisements. Since instances of illegal search results and advertisements being displayed were not rampant to begin with, there seems to be no urgent need to impose strict measures like ‘auto-blocks’.

The Supreme Court seems to be imposing similarly arbitrary rules upon search engines in other judgments. Recently, the Court ordered Google, Microsoft and Yahoo to create a ‘firewall’ that would prevent illegal videos from being uploaded to the internet. It cited the example of China creating a similar firewall to demonstrate the feasibility of the order.