Pachauri defamation suit: Court rejects interim gag order plea

The Patiala House court in Delhi has rejected R. K. Pachauri’s plea for an interim gag order against NDTV, Bennett Coleman and Co., and the India Today Group. The media houses are defendants in a defamation suit he filed in 2016.

In 2015, an FIR was filed against Pachauri by a woman employee of TERI (The Energy and Resources Institute), of which he was then the chief, accusing him of sexual harassment. Following these allegations, several other women spoke out about similar experiences during their time at the organization. The allegations and the ongoing proceedings have received extensive coverage in the media.

Pachauri filed for defamation against multiple parties, including the media houses, one of the women who had spoken out, and her lawyer. He sought a gag order against the media houses, and damages of Rs. 1 crore from the victim and her lawyer.

We have written previously about how suits such as these are in the nature of ‘SLAPP’ suits – Strategic Lawsuits Against Public Participation. These are cases where powerful individuals and corporations use litigation as a way of intimidating or silencing their critics. The defendants are usually media houses or individuals who are then forced to muster the resources to mount a legal defense. Even if they are able to secure a victory in Court, it is at the cost of a protracted and expensive process.

The court has now refused to grant an interim injunction against the media houses, noting the public’s right to be aware of developments in the case. It further observed that public figures can be subjected to a greater degree of scrutiny by the public. However, it has also held that further reportage must carry Pachauri’s views and indicate that the matter is still pending before the court. The text of the order may be found here.


Facebook and its (dis)contents

In 2016, Norwegian writer Tom Egeland uploaded a post on Facebook listing seven photographs that “changed the history of warfare”. The post featured the Pulitzer-winning image ‘The Terror of War’, which depicts a naked nine-year-old girl fleeing a napalm attack during the Vietnam War. Facebook deleted the post and suspended Egeland’s account.

A Norwegian newspaper, Aftenposten, while reporting on the suspension, used the same image on its Facebook page. The newspaper soon received a message from Facebook demanding that the image be either removed or pixelated. The editor-in-chief refused to comply in an open letter to Mark Zuckerberg, noting his concern at the immense power Facebook wielded over speech online. The issue escalated when several Norwegian politicians, including the Prime Minister, shared the image on Facebook and were temporarily suspended as well.

Facebook initially stated that it would be difficult to distinguish between instances where a photograph of a nude child could be allowed and those where it could not. However, following widespread censure, the platform eventually reinstated the image owing to its “status as an iconic image of historical importance.”

This incident brought to light the tricky position Facebook finds itself in as it attempts to police its platform. Facebook addresses illegal and inappropriate content through a mix of automated processes and human moderation. The company publishes guidelines, called its ‘Community Standards’, about what content may not be appropriate for its platform. Users can ‘flag’ content that they think does not meet the Community Standards, which is then reviewed by moderators. Moderators may delete, ignore, or escalate flagged content to a senior manager. In some cases, the user’s account may be suspended, or the user may be asked to submit identity verification.
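
The broad workflow described above – users flag content, human moderators review it against the Community Standards and then ignore, delete or escalate it – can be sketched roughly as follows. This is only an illustrative model; the names, categories and decision functions are our own assumptions, not Facebook’s actual internal systems.

```python
from dataclasses import dataclass
from enum import Enum

# Purely illustrative: the names, categories and decision logic below are
# assumptions for the sake of the sketch, not Facebook's internal tooling.

class Action(Enum):
    IGNORE = "ignore"
    DELETE = "delete"
    ESCALATE = "escalate to a senior manager"

@dataclass
class FlaggedPost:
    post_id: str
    text: str
    flag_reason: str  # the Community Standard the flagging user cited

def review(post: FlaggedPost, violates_standards, is_borderline) -> Action:
    """One moderator's pass over a flagged post, reduced to its simplest form."""
    if is_borderline(post):
        return Action.ESCALATE   # ambiguous cases go up the chain
    if violates_standards(post):
        return Action.DELETE     # may also trigger suspension or ID verification
    return Action.IGNORE         # the flag is judged unfounded

# Example: a post flagged for nudity that a reviewer judges clear-cut
decision = review(
    FlaggedPost("p1", "…", "nudity"),
    violates_standards=lambda p: True,
    is_borderline=lambda p: False,
)
print(decision)  # Action.DELETE
```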

As the ‘Terror of War’ incident shows, Facebook has often come under fire both for supposedly ‘wrong’ moderation decisions and for opacity in how its review process is applied. Critics argue that content plainly in violation of the Community Standards is often left up, while content that should be safe is censored. For instance, Facebook courted controversy again when it was accused of blocking content and accounts documenting the persecution of the Rohingya Muslim community in Myanmar.

Closer to home, multiple instances of Facebook’s questionable moderation practices have come to light. In October 2017, Raya Sarkar, a law student based in the United States, created what came to be called ‘the List’, which named over 70 prominent academics who had been accused of sexual harassment. The approach proved extremely controversial, sparking debates about due process and the failure of institutional mechanisms to address harassment. Facebook’s decision to block her account for seven days proved equally contentious; the account was restored only after Facebook staff in Palo Alto were contacted directly. Similar instances of seemingly arbitrary application of the Community Standards have been reported, with accounts suspended and content blocked without notice, explanation or recourse.

Content moderation inherently involves considerable scope for interpretation and disagreement. Factors such as context and cultural difference render it a highly subjective exercise. Algorithms do not appear to have reached sufficient levels of sophistication, and there are larger issues associated with the automated censoring of speech. Human moderators are by all accounts burdened by the volume and the psychologically taxing nature of the work, and are therefore prone to error. The way forward should therefore be, first, to ensure that transparent mechanisms of recourse exist against the removal of legitimate speech.

In the wake of the ‘Terror of War’ incident, Facebook updated its Community Standards. In a statement, it said that it would allow graphic material that is “newsworthy, significant, or important to the public interest — even if they might otherwise violate our standards.” Leaked moderator guidelines in 2017 opened the company up to granular public critique of its policies. There is evidently scope for Facebook to be more responsive and consultative in how it regulates speech online.

In June 2017, Facebook reached 2 billion monthly users, making it the largest social network, and a platform for digital interaction without precedent. It has announced plans to reach 5 billion. With the influence it now wields, it must also embrace its responsibility to be more transparent and accountable to its users.

Call for Applications – Civil Liberties

Update: Deadline to apply extended to January 15, 2018! 

The Centre for Communication Governance at the National Law University Delhi (CCG) invites applications for research positions in its Civil Liberties team on a full time basis.

About the Centre

The Centre for Communication Governance is the only academic research centre dedicated to working on information law and policy in India, and in a short span of four years it has become a leading centre on information policy in Asia. It seeks to embed human rights and good governance within communication policy and to protect digital rights in India through rigorous academic research and capacity building.

The Centre routinely works with a range of international academic institutions and policy organizations. These include the Berkman Klein Center at Harvard University, the Programme in Comparative Media Law and Policy at the University of Oxford, the Center for Internet and Society at Stanford Law School, the Hans Bredow Institute at the University of Hamburg and the Global Network of Interdisciplinary Internet & Society Research Centers. We engage regularly with government institutions and ministries such as the Law Commission of India, the Ministry of Electronics & IT, the Ministry of External Affairs, the Ministry of Law & Justice and the International Telecommunication Union. We work actively to provide the executive and judiciary with useful research in the course of their decision making on issues relating to civil liberties and technology.

CCG has also constituted two advisory boards, a faculty board within the University and one consisting of academic members of our international networks. These boards will oversee the functioning of the Centre and provide high level inputs on the work undertaken by CCG from time to time.

About Our Work

The work at CCG is designed to build competence and raise the quality of discourse in research and policy around issues concerning civil liberties and the Internet, cybersecurity and global Internet governance. The research and policy output is intended to catalyze effective, research-led policy making and informed public debate around issues in technology and Internet governance.

The work of our civil liberties team covers the following broad areas:

  1. Freedom of Speech & Expression: Research in this area focuses on human rights and civil liberties in the context of the Internet and emerging communication technology in India. Research on this track squarely addresses the research gaps around the architecture of the Internet and its impact on free expression.
  2. Access, Markets and Public Interest: Research under this area will consider questions of access, including how the human right to free speech could help to guarantee access to the Internet. It will identify areas where competition law may need to intervene to ensure free, fair and human rights-compatible access to the Internet and to opportunities to communicate using online services. Work in this area will consider how existing competition and consumer protection law could be applied to ensure that freedom of expression in new media, and particularly on the internet, is protected given market realities on the supply side. Under this track we will also put out material on net neutrality concerns, which are closely associated with competition, innovation, media diversity and the protection of human rights, especially the rights to free expression and to receive information, and with substantive equality across media. The work will also engage with existing theories of media pluralism in this context.
  3. Privacy, Surveillance & Big Data: Research in this area focuses on surveillance as well as data protection practices, laws and policies. The work may be directed either at the normative questions that arise in the context of surveillance or data protection, or at empirical work, including data gathering and analysis, with a view to enabling policy and law makers to better understand the pragmatic concerns in developing realistic and effective privacy frameworks. This work area extends to the right to be forgotten and data localization.

Role

CCG is a young and continuously evolving organization and the members of the centre are expected to be active participants in building a collaborative, merit led institution and a lasting community of highly motivated young researchers.

Selected applicants will ordinarily be expected to design and produce units of publishable research with the Director(s) and senior staff members. They will also recommend and assist with designing and executing policy positions and external actions on a broad range of information policy issues.

Equally, they will be expected to participate in other work, including writing opinion pieces, blog posts, press releases and memoranda, and to help with outreach. Selected applicants will also represent CCG in the media and at events, roundtables and conferences, and before relevant governmental and other bodies. In addition, they will have organizational responsibilities such as providing inputs for grant applications, networking, and designing and executing Centre events.

Qualifications

The Centre welcomes applications from candidates with advanced degrees in law, public policy and international relations.

  • Candidates should preferably be able to provide evidence of an interest in human rights, technology law and policy, Internet governance or national security law. In addition, they must have a demonstrable capacity for high-quality, independent work.
  • In addition to written work, a project/ programme manager within CCG will be expected to play a significant leadership role. This ranges from proactive agenda-setting to administrative and team-building responsibilities.
  • Successful candidates for the project / programme manager position should show great initiative in managing both their own and their team’s workloads. They will also be expected to lead and motivate their team through high stress periods and in responding to pressing policy questions.

However, the length of your resume is less important than the other qualities we are looking for. As a young, rapidly-expanding organization, CCG anticipates that all members of the Centre will have to manage large burdens of substantive as well as administrative work in addition to research. We are looking for highly motivated candidates with a deep commitment to building information policy that supports and enables human rights and democracy.

At CCG, we aim very high and we demand a lot of each other in the workplace. We take great pride in high-quality outputs and value individuality and perfectionism. We like to maintain the highest ethical standards in our work and workplace, and love people who manage all of this while being as kind and generous as possible to colleagues, collaborators and everyone else within our networks. A sense of humour will be most welcome. Even if you do not necessarily fit the requirements mentioned in the two bulleted points above but bring to us the other qualities we look for, we would love to hear from you.

[The Centre reserves the right to not fill the position(s) if it does not find suitable candidates among the applicants.]

Positions

Based on experience and qualifications, successful applicants will be placed in the following positions. Please note that our interview panel has the discretion to determine which profile would be most suitable for each applicant.

  • Programme Officer (2-4 years’ work experience)
  • Project Manager (4-6 years’ work experience)
  • Programme Manager (6-8 years’ work experience)

A Master’s degree from a highly regarded programme might count towards work experience.

CCG staff work at the Centre’s offices on National Law University Delhi’s campus. The positions on offer are for a duration of one year, though we expect a commitment of two years.

Remuneration

The salaries will be competitive, and will usually range from ₹50,000 to ₹1,20,000 per month, depending on multiple factors including relevant experience, the position and the larger research project under which the candidate can be accommodated.

Where candidates demonstrate exceptional competence in the opinion of the interview panel, there is a possibility for greater remuneration.

Procedure for Application

Interested applicants are required to send the following information and materials by December 30, 2017 to ccgcareers@nludelhi.ac.in.

  1. Curriculum Vitae (maximum 2 double spaced pages)
  2. Expression of Interest in joining CCG (maximum 500 words).
  3. Contact details for two referees (at least one academic). Referees must be informed that they might be contacted for an oral reference or a brief written reference.
  4. One academic writing sample of between 1000 and 1200 words (essay or extract, published or unpublished).

Shortlisted applicants may be called for an interview.

 

Understanding the ‘NetzDG’: Privatised censorship under Germany’s new hate speech law

By William James Hargreaves

The Network Enforcement Act

The Network Enforcement Act (NetzDG), a law passed by the German Government on 30 June 2017, allows social media companies to be fined up to 50 million Euros – approximately 360 crore rupees – if they persistently fail to remove hate speech from their platforms within 24 hours of the content being posted. Companies have up to one week where the illegality of the content is debatable.
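
Reduced to its mechanics, the Act sets two removal clocks: 24 hours where the content is manifestly unlawful, and up to seven days where its illegality is debatable. A minimal sketch of that rule follows; the function name and the idea of a pre-classified ‘manifestly unlawful’ flag are our own illustrative assumptions, since making that classification is precisely the judgment the Act leaves to the platforms.

```python
from datetime import datetime, timedelta

# Illustrative only: deciding whether a complaint concerns "manifestly
# unlawful" content is itself the hard legal question the Act delegates
# to platforms; here it is simply passed in as a boolean.

def removal_deadline(reported_at: datetime, manifestly_unlawful: bool) -> datetime:
    """Return the latest time by which NetzDG expects the content removed."""
    window = timedelta(hours=24) if manifestly_unlawful else timedelta(days=7)
    return reported_at + window

# Example: a complaint about clearly illegal content received at noon on 1 January
print(removal_deadline(datetime(2018, 1, 1, 12, 0), manifestly_unlawful=True))
# -> 2018-01-02 12:00:00
```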

NetzDG is intended to hold social media companies financially liable for the opinions posted on their platforms. The Act will effectively subject social media platforms to the stricter content standards demanded of traditional media broadcasters.

Why was the Act introduced?

Germany is one of the world’s strictest regulators of hate speech. The State’s Criminal Code covers issues of defamation, public threats of violence and incitement to illegal conduct, and provides for incarceration for Holocaust denial or inciting hatred against minorities. Germany is a country sensitive to the persuasive power of oratory in radicalizing opinion. The parameters of these sensitivities are being tested as the influx of more than one million asylum seekers and migrants has catalyzed a notably belligerent public discourse.

In response to the changing discourse, Facebook and a number of other social media platforms consented in December 2015 to the terms of a code of conduct drafted by the Merkel Government. The code of conduct was intended to ensure that platforms adhered to Germany’s domestic law when regulating user content. However, a study monitoring Facebook’s compliance found the company deleted or blocked only 39 percent of reported content, a rate that put Facebook in breach of the agreement.

NetzDG turns the voluntary agreement into a binding legal obligation, making Facebook liable for any future failure to adhere to its terms.

In a statement made following the law’s enactment, German Justice Minister Heiko Maas declared ‘With this law, we put an end to the verbal law of the jungle on the Internet and protect the freedom of expression for all… This is not a limitation, but a prerequisite for freedom of expression’. The premise of Minister Maas’s position, and the starting point for the principles that justify the illegality of hate speech, is that verbal radicalization is often the precursor to physical violence.

As the world’s predominant social media platform, Facebook has curated unprecedented and, in some respects, unconditioned access to people and their opinions. Given the extent of that access, this post will focus on the possible effects of the NetzDG on Facebook and its users.

Facebook’s predicament

  • Regulatory methods

How Facebook intends to observe the NetzDG is unclear. The social media platform, whose users now constitute one-quarter of the world’s population, has previously been unwilling to disclose the details of its internal censorship processes. However, given the potential financial exposure and the sustained increase in user content, Facebook must, to some extent, increase its capacity to evaluate and regulate reported content. In response, Facebook announced in May that it would nearly double the number of employees tasked with removing content that violates its guidelines. Whether this increase in capacity will be sufficient will be determined in time.

However, and regardless of the move’s effectiveness, Facebook’s near-doubling of capacity implies that human interpretation remains the final authority, and that implication raises a number of questions. To what extent can manual censorship keep up with the constant increase in content? Can the same processes remain effective in a climate where hate speech is increasingly prevalent in public discourse? If automated censorship becomes necessary, who decides the algorithm’s parameters, and how sensitive might those parameters be to the nuances of expression and interpretation? In passing the NetzDG, the German Government has relinquished the State’s authority to fully decide the answers to these questions. The jurisdiction of the State in matters of communication regulation has, to a certain extent, been privatised.

  • Censorship standards

Recently, the investigative journalism platform ProPublica claimed possession of documents purported to be internal censorship guidelines used at Facebook. The unverified guidelines instructed employees to remove the phrase ‘migrants are filth’ but to permit ‘migrants are filthy’. Whether the documents are legitimate is to some extent irrelevant: they provide a useful example of the specificity required where the aim is to guide one person’s interpretation of language toward a specific end – in this instance toward a correct judgment of legality or illegality.

Regardless of the degree of specificity, it is impossible for any formulation of guidelines to cover every possible manifestation of hate speech. Interpreting reported content will therefore necessarily require some degree of discretion. This necessity raises the question: to what extent will affording private entities discretionary powers of censorship impede freedoms of communication, particularly where the discretion afforded is conditioned by financial risk and a determination is required within a 24-hour period?

  • Facebook’s position

Statements made by Facebook prior to the legislation’s enactment expressed concern about the effect the Act would have on the already complex issue of content moderation. ‘The draft law provides an incentive to delete content that is not clearly illegal when social networks face such a disproportionate threat of fine’, a statement noted. ‘(The Act) would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies’. Facebook’s reservation is telling: the company’s reluctance to adopt the role of moderator to the extent required alludes to the potential consequences of the liability imposed by the Act.

The problem with imposing this form of liability

Any decision made by a social media platform to censor user content will be supported by the anti-discrimination principles prescribed by the NetzDG. However, where the motivation behind discretionary decision-making shifts away from social utility towards financial management, the guiding considerations become efficiency and risk minimisation. Efficiency and risk minimisation in this instance require Facebook to either (i) increase capacity, which in turn increases its financial burden, or (ii) adopt guidelines that minimise exposure.

Seemingly the approach adopted by Facebook is to increase capacity. However, Facebook’s concerns that the Act creates financial incentives to adopt guidelines that minimise exposure are significant. Such concerns demonstrate an understanding that requiring profit motivated companies to do the work of the State within a 24-hour time frame will necessarily require a different set of parameters than those imposed on the regulation of oral hate speech. If Facebook, in drafting and applying those parameters, decides to err on the side of caution and, in some instances, censor otherwise legal content, that decision will have directly infringed the freedom of communication enjoyed by German citizens.

A democracy must be able to accommodate contrasting opinions if it purports to respect rights of communication and expression. Conversely, limitations on those rights may be justified if they benefit the majority. The NetzDG is Germany’s recognition that the nature of online communication – the speed at which ideas promulgate and proliferate, and the disconnect between comment and consequence created by online anonymity – requires that the existing limitations on the freedom of communication be adapted. Whether instances of infringement are warranted in the current climate is a difficult and complicated extension of the debate between the utility of regulating hate speech and the corresponding consequences for the freedoms of communication and expression. The decision to pass the NetzDG suggests the German Government considers the risk of infringement acceptable when measured against the consequences of unfettered hate speech.

Public recognition that the NetzDG poses a risk is important. It is best practice within a democracy that any new limit on liberty, oral or otherwise, be questioned and a justification given. Here the justification seems well-founded. However, the answers to the questions posed by sceptics may prove telling as Germany positions itself at the forefront of the debate over online censorship.

(William is a student at the University of Melbourne and is currently interning at CCG)

Streaming platforms and self-censorship: An Indian perspective

Introduction

In May 2017, a movie titled ‘Angry Indian Goddesses’ was released on Netflix India. A censored version of the film, originally intended for theatrical release, was made available. Critics drew attention to the self-censorship Netflix was resorting to in the absence of censorship guidelines for streaming platforms. While theatrical releases are regulated by the Central Board of Film Certification (CBFC), its jurisdiction does not extend to online platforms, as was recently made evident through an RTI response from the Ministry of Information and Broadcasting. Eventually, the director of ‘Angry Indian Goddesses’ informed viewers that it was Netflix that had insisted on making the censored version available.

Other platforms like Amazon Prime and Hotstar also indulge in the precarious practice of ‘self-censorship’. Under the law, films meant for theatrical release are certified by the CBFC, and through the process of certification the CBFC has the power to require edits to a film. However, there is no legal stipulation that streaming services censor content as the CBFC would. In some instances, documentaries that were never intended for theatrical release in India have been made available on streaming platforms in their censored forms. This post will examine this phenomenon of self-censorship.

What is the applicable law?  

Prior to the RTI response from the Ministry of Information and Broadcasting, there had been speculation over whether streaming platforms are Internet Protocol Television (IPTV) services. IPTVs in India are bound by the Cable Television Networks (Regulation) Act, 1995, and need a license from the Department of Telecommunications to function. However, streaming services are considered to be over-the-top (OTT) services, and are not bound by the same regulations.

The status of streaming platforms has been considered by the judiciary as well. In 2016, a petition was filed in the Delhi High Court stating that the online streaming service Hotstar had made ‘soft pornographic’ content available on its platform. The petition stated that Hotstar, as an IPTV service, was in contravention of the downlinking guidelines. In response, Hotstar disputed its status as an IPTV service and categorically stated that it did not host any content that could be considered ‘soft pornography’.

This case has not made any progress since 2016, and there seems to be no judicial consensus on the status of streaming platforms as IPTV service providers.

In a recent judgment, Raksha Jyoti Foundation vs. Union of India, the Punjab and Haryana High Court referred to an affidavit filed by the CBFC proposing a mechanism to ensure that portions deleted from a film are not subsequently released by other means. This would be carried out through undertakings to which directors and producers would be held. Such a system would effectively ensure that uncensored films are not made available on streaming platforms. It is unclear what the current status of this censorship procedure is, but if carried out, it would be in conflict with the RTI response.

Platform specific guidelines

Platforms like Netflix have not published censorship guidelines of their own, but they do have separate ‘maturity ratings’ for different countries and regions. The CEO of Netflix has also stated that the service would have ‘airplane cuts’ of movies for different regions, noting that ‘entertainment companies have to make compromises over time’.

Why self-censorship?

Despite the absence of censorship laws applicable to streaming platforms, there are still other laws applicable to these platforms in India. As mentioned above, the downlinking guidelines were one such set of rules which were considered applicable. In addition, statutes like the Information Technology Act, 2000 and the Indian Penal Code, 1860 would also be applicable. It could be the case that streaming platforms are censoring content to ensure that they are in compliance with other statutes.

There is also a possibility that international services like Netflix and Amazon Prime are trying to find their place in the Indian market without drawing attention for the wrong reasons. Amazon for instance has publicly stated that they intend to keep in mind ‘Indian cultural sensitivities’ while making content available. In addition, platforms like Hotstar are run by parent companies like Star India, with ancillary business interests that they would be interested in protecting.

Conclusion

Unexpectedly, streaming platforms, which were meant to be avenues of free media in an age of heavily regulated television content, are following the same route as traditional media outlets.

This trend of self-censorship on streaming platforms mirrors that of other internet platforms, which resort to self-censorship to avoid legal trouble. The tendency to ‘err on the side of caution’ resembles the way platforms comply with India’s intermediary liability rules. This tip-toeing around issues of regulation has had a chilling effect on other internet platforms and could also lead to ‘over-censorship’ on streaming websites.

It is disconcerting that streaming websites are censoring content even in the absence of laws requiring them to, and it leaves us speculating about the state of freedom of expression once censorship laws are in place.

How (not) to get away with murder: Reviewing Facebook’s live streaming guidelines

Introduction

The recent shooting in Cleveland, live streamed on Facebook, has brought the social media company’s regulatory responsibilities into question. Since the launch of Facebook Live in 2016, the service’s role in raising political awareness has been acknowledged. However, the service has also been used to broadcast several instances of graphic violence.

The streaming of violent content (including instances of suicide, murders and gang rapes) has raised serious questions about Facebook’s responsibility as an intermediary. While it is not technically feasible for Facebook to review all live videos while they are being streamed, or to filter them before they are streamed, the platform does have a routine procedure in place to take down such content. This post will examine the guidelines in place to take down live streamed content and discuss alternatives to the existing reporting mechanism.

What guidelines are in place?

Facebook has ‘community standards’ in place. However, its internal regulation methods are unknown to the public. Live videos have to be in compliance with the ‘community standards’, which specify that Facebook will remove content relating to ‘direct threats’, ‘self-injury’, ‘dangerous organizations’, ‘bullying and harassment’, ‘attacks on public figures’, ‘criminal activity’ and ‘sexual violence and exploitation’.

The company has stated that it ‘only takes one report for something to be reviewed’. This system of review has been criticized, since graphic content can go unnoticed if no one reports it. Moreover, this form of reporting may be ineffective because viewers are under no obligation to report. Incidentally, the Cleveland shooting video was not detected by Facebook until it was flagged as ‘offensive’, a couple of hours after the incident. The company has also stated that it is working on developing ‘artificial intelligence’ that could help put an end to such broadcasts. For now, however, it relies on the reporting mechanism, under which ‘thousands of people around the world’ review posts that have been reported. The reviewers check whether the content violates the ‘community standards’ and ‘prioritize videos with serious safety implications’.
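
In effect, the mechanism described above behaves like a priority queue: a single report places a video in the review pile, and videos with ‘serious safety implications’ are pulled out first. The sketch below illustrates that idea only; the severity categories and scores are our own invented placeholders, not Facebook’s actual ranking.

```python
import heapq

# Illustrative sketch: the categories and severity scores are assumptions
# made for this example; Facebook does not publish how reports are ranked.
SEVERITY = {"self-harm": 0, "violent crime": 0, "harassment": 1, "nudity": 2, "spam": 3}

class ReviewQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0          # tie-breaker: earlier reports reviewed first

    def report(self, video_id: str, category: str) -> None:
        """A single user report is enough to queue the video for human review."""
        severity = SEVERITY.get(category, 3)
        heapq.heappush(self._heap, (severity, self._counter, video_id))
        self._counter += 1

    def next_for_review(self) -> str:
        """Hand the most safety-critical reported video to a reviewer."""
        return heapq.heappop(self._heap)[2]

queue = ReviewQueue()
queue.report("video-123", "spam")
queue.report("video-456", "violent crime")
print(queue.next_for_review())   # -> "video-456" is reviewed first
```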

While deciding whether a video should be taken down, reviewers also take the ‘context and degree’ of the content into consideration. For instance, content aimed at ‘raising awareness’, even if it displays violence, will be allowed, whereas content celebrating such violence will be taken down. To illustrate, when a live video of civilian Philando Castile being shot by a police officer in Minnesota went viral, Facebook kept the video up on its platform, stating that it did not glorify the violent act.

Regulation

Other than the internal guidelines by which Facebook regulates itself, there have been no instances of government regulators, such as the United States’ Federal Communications Commission (FCC), intervening. Unlike the realm of television, where the FCC regulates content and deems material ‘inappropriate’, social media websites are protected from content regulation.

This brings up the question of intermediary liability and Facebook’s responsibility for hosting graphic content. Under American law, there is a distinction between ‘publishers’ and ‘common carriers’. A common carrier only ‘enables communications’ and does not ‘publish content’; if a platform edits content, it is most likely a publisher. A ‘publisher’ has a higher level of responsibility for content hosted on its platform than a ‘carrier’. In most instances, social media companies are covered by Section 230 of the Communications Decency Act, a safe harbor provision under which they are not held liable for third-party content. However, questions have been raised about whether Facebook is a ‘publisher’ or a ‘common carrier’, and there seems to be no conclusive answer.

Conclusion

Several experts have considered possible solutions to this growing problem. Some believe that such features should be limited to certain partners and opened up to the public only once additional safeguards and better artificial intelligence technologies are in place. In these precarious situations, enforcing stricter laws on intermediaries might not resolve the issue at hand. Some jurisdictions have ‘mandatory reporting’ provisions, specifically for crimes of sexual assault. In India, under Section 19 of the Protection of Children from Sexual Offences Act, 2012, ‘any person who has apprehension that an offence…is likely to be committed or has knowledge that such an offence has been committed’ has to report the offence. In the context of cyber-crimes, a system of ‘mandatory reporting’ would shift the onus onto viewers and supplement the existing reporting system. Mandatory provisions of this nature do not exist in the United States, where most of the larger social media companies are based.

Similarly, possible solutions should focus on strengthening the existing reporting system, rather than holding social media platforms liable.

Reviewing the Law Commission’s latest hate speech recommendations

Introduction

The Law Commission has recently released a report on hate speech laws in India. The Supreme Court, in Pravasi Bhalai vs. Union of India, asked the Law Commission to recommend changes to existing hate speech laws and to “define the term hate speech”. The report discusses the history of hate speech jurisprudence in India and in certain other jurisdictions. It also stresses the difficulty of defining hate speech and the lack of a concise definition; in the absence of such a definition, certain ‘identifying criteria’ are set out to help detect instances of hate speech. The report further discusses the theories of Jeremy Waldron (the ‘dignity’ principle) and makes a case for protecting the interests of minority communities by regulating speech. To this end, two new sections of the IPC have been proposed. They are as follows:

(i) Prohibiting incitement to hatred-

“153 C. Whoever on grounds of religion, race, caste or community, sex, gender identity, sexual orientation, place of birth, residence, language, disability or tribe –

(a)  uses gravely threatening words either spoken or written, signs, visible representations within the hearing or sight of a person with the intention to cause, fear or alarm; or

(b)  advocates hatred by words either spoken or written, signs, visible representations, that causes incitement to violence shall be punishable with imprisonment of either description for a term which may extend to two years, and fine up to Rs 5000, or with both.”

(ii) Causing fear, alarm, or provocation of violence in certain cases.

“505 A. Whoever in public intentionally on grounds of religion, race, caste or community, sex, gender, sexual orientation, place of birth, residence, language, disability or tribe-

uses words, or displays any writing, sign, or other visible representation which is gravely threatening, or derogatory;

(i) within the hearing or sight of a person, causing fear or alarm, or;

(ii) with the intent to provoke the use of unlawful violence,

against that person or another, shall be punished with imprisonment for a term which may extend to one year and/or fine up to Rs 5000, or both”.

The author is of the opinion that these recommended amendments are vague and broadly worded and could lead to a chilling effect and over-censorship. Here are a few reasons why the recommendations might not be compatible with free speech jurisprudence:

  1. Three-part test

Article 10 of the European Convention on Human Rights lays down three requirements that need to be fulfilled for a restriction on free speech to be warranted. The Law Commission report also discusses this test; it comprises the requirement that a measure be ‘prescribed by law’, the need for a ‘legitimate aim’ and the test of ‘necessity and proportionality’.

Under the ‘prescribed by law’ standard, a restriction on free speech must be ‘clear and not ambiguous’. For instance, a phrase like ‘fear or alarm’ (which appears in Section 153A and Section 505) has been criticized as ‘vague’. Without defining or restricting this term, the public would not be aware of what constitutes ‘fear or alarm’ and would not know how to comply with the law. This standard was also reiterated in Shreya Singhal vs. Union of India, where it was held that the ambiguously worded Section 66A could be problematic for innocent people, since they would not know “which side of the line they fall” on.

  2. Expanding scope to online offences?

The newly proposed sections also mention that any ‘gravely threatening words within the hearing or sight of a person’ would be penalized. Presumably, the phrase ‘within the hearing or sight of a person’ broadens the scope of the provision and could bring online speech within the ambit of the IPC. This phrase is similar to the wording of Section 5(1) of the Public Order Act, 1986[1] in the United Kingdom, which penalizes “harassment, alarm or distress”. Even though that section does not explicitly mention that it covers offences committed on the internet, it has been presumed to do so.[2]

Similarly, if the intent of the framers of Section 153C is to expand its scope to cover online offences, it might introduce the same issues as the now struck-down Section 66A of the IT Act did. Section 66A penalized the transmission of information that was ‘menacing’ or that promoted ‘hatred or ill will’. The over-breadth of its terms was one reason the section was struck down; another was its lowering of the ‘incitement’ threshold (discussed below). Even though the proposed Section 153C does not provide for as many grounds (hatred, ill will, annoyance, etc.), it does explicitly lower the threshold from ‘incitement’ to ‘fear or alarm’/’discrimination’.

  3. The standard of ‘hate speech’

The report also advocates penalizing the ‘fear or alarm’ caused by such speech, since it could potentially have the effect of ‘marginalizing a section of the society’. As mentioned above, the report explicitly states that the threshold of ‘incitement to violence’ should be lowered and that factors like ‘incitement to discrimination’ should also be considered.

The Shreya Singhal judgment drew a distinction between ‘discussion, advocacy and incitement’, stating that a restriction on the freedom guaranteed under Article 19(1)(a) of the Constitution would be justified only where the speech amounts to ‘incitement’, and not where it is mere ‘discussion’ or ‘advocacy’. This distinction was drawn so that discussing or advocating ideas which could lead to problems with ‘public order’ or disturb the ‘security of the state’ could be differentiated from ‘incitement’, which establishes more of a ‘causal connection’.

Similarly, if the words used contribute to causing ‘fear or alarm’, the threshold of ‘incitement’ would be lowered, and constitutionally protected speech could be censored.

Conclusion

Despite the shortcomings mentioned above, the report is positive in a few ways. It draws attention to important contemporary issues affecting minority communities and to how speech is often used to mobilize communities against one another. It also relies on Jeremy Waldron’s ‘dignity principle’ to make a case for imposing differential hate speech standards to protect minority communities. In addition, the grounds for discrimination now include ‘tribe’ and ‘sexual orientation’, amongst others.

However, existing case law, coupled with recent instances of censorship, could make the insertion of these provisions troubling. India’s relationship with free speech is already strained; the Press Freedom Index ranks the country at 133 (out of 180) and the Freedom on the Net report rates India as only ‘partly free’. The Law Commission might need to reconsider its recommendations for the sake of upholding free speech. Pravasi Bhalai called for sanctioning politicians’ speeches, but the recommendations made by the Law Commission reach much further, and their effects could be chilling.

 

[1] Section 5- Harassment, alarm or distress.
(1)A person is guilty of an offence if he—
(a)uses threatening or abusive words or behaviour, or disorderly behaviour, or
(b)displays any writing, sign or other visible representation which is threatening or abusive,
within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby.

[2] David Wall, Cybercrime: The Transformation of Crime in the Information Age, Polity, p. 123.