How (not) to get away with murder: Reviewing Facebook’s live streaming guidelines

Introduction

The recent shooting in Cleveland, live-streamed on Facebook, has brought the social media company's regulatory responsibilities into question. Since the launch of Facebook Live in 2016, the service's role in raising political awareness has been acknowledged. However, the service has also been used to broadcast several instances of graphic violence.

The streaming of violent content (including instances of suicide, murder and gang rape) has raised serious questions about Facebook's responsibility as an intermediary. While it is not technically feasible for Facebook to review all live videos as they are streamed, or to filter them before they are streamed, the platform does have a routine procedure in place to take down such content. This post examines the guidelines under which live streamed content is taken down and discusses alternatives to the existing reporting mechanism.

What guidelines are in place?

Facebook has ‘community standards’ in place; however, its internal review methods are unknown to the public. Live videos must comply with these standards, which specify that Facebook will remove content relating to ‘direct threats’, ‘self-injury’, ‘dangerous organizations’, ‘bullying and harassment’, ‘attacks on public figures’, ‘criminal activity’ and ‘sexual violence and exploitation’.

The company has stated that it ‘only takes one report for something to be reviewed’. This system of review has been criticized, since graphic content could go unnoticed without a report. In addition, reporting can fail because there is no mandate of ‘compulsory reporting’ for viewers. Incidentally, the Cleveland shooting video was not detected by Facebook until it was flagged as ‘offensive’, a couple of hours after the incident. The company has also stated that it is working on developing ‘artificial intelligence’ that could help put an end to these broadcasts. For now, however, it relies on the reporting mechanism, under which ‘thousands of people around the world’ review posts that have been reported. The reviewers check whether the content goes against the ‘community standards’ and ‘prioritize videos with serious safety implications’.
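
To make the triage idea concrete, here is a minimal sketch of a report-driven review queue, assuming a simple severity ranking. This is purely illustrative: Facebook has not published its internal tooling, and every name, category and severity value below is a hypothetical stand-in.

```python
# Hypothetical sketch of a report-driven review queue; none of this reflects
# Facebook's actual (non-public) systems. Severity categories are assumed.
import heapq
import itertools

# Lower number = reviewed sooner; 'safety' reports jump the queue.
SEVERITY = {"safety": 0, "violence": 1, "harassment": 2, "other": 3}

class ReviewQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker: earlier reports first

    def report(self, video_id, category):
        # A single report is enough to enqueue a video for human review.
        priority = SEVERITY.get(category, SEVERITY["other"])
        heapq.heappush(self._heap, (priority, next(self._order), video_id))

    def next_for_review(self):
        # Hand reviewers the most safety-critical outstanding video.
        return heapq.heappop(self._heap)[2] if self._heap else None

queue = ReviewQueue()
queue.report("live-123", "harassment")
queue.report("live-456", "safety")   # reviewed first despite arriving later
assert queue.next_for_review() == "live-456"
```

The point of the sketch is only to show how ‘one report triggers review’ and ‘prioritize videos with serious safety implications’ can coexist: every report enters the queue, but ordering, not admission, encodes the priority.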

While deciding whether a video should be taken down, reviewers also take the ‘context and degree’ of the content into consideration. For instance, content aimed at ‘raising awareness’, even if it displays violence, will be allowed, while content celebrating such violence will be taken down. To demonstrate: when a live video of civilian Philando Castile being shot by a police officer in Minnesota went viral, Facebook kept the video up on its platform, stating that it did not glorify the violent act.

Regulation

Beyond the internal guidelines by which Facebook regulates itself, there have been no instances of government regulators, such as the United States’ Federal Communications Commission (FCC), intervening. Unlike the realm of television, where the FCC regulates content and deems material ‘inappropriate’, social media websites are protected from content regulation.

This brings up the question of intermediary liability and Facebook’s liability for hosting graphic content. Under American law, there is a distinction between ‘publishers’ and ‘common carriers’. A common carrier only ‘enables communications’ and does not ‘publish content’; if a platform edits content, it is most likely a publisher. A ‘publisher’ bears a higher level of responsibility for content hosted on its platform than a ‘carrier’. In most instances, social media companies are covered by Section 230 of the Communications Decency Act, a safe harbor provision under which they are not held liable for third-party content. However, questions have been raised about whether Facebook is a ‘publisher’ or a ‘common carrier’, and there seems to be no conclusive answer.

Conclusion

Several experts have considered possible solutions to this growing problem. Some believe that such features should be limited to certain partners and opened up to the public only once additional safeguards and better artificial intelligence technologies are in place. In these precarious situations, enforcing stricter laws on intermediaries might not resolve the issue at hand. Some jurisdictions have ‘mandatory reporting’ provisions, specifically for crimes of sexual assault. In India, under Section 19 of the Protection of Children from Sexual Offences Act, 2012, ‘any person who has apprehension that an offence…is likely to be committed or has knowledge that such an offence has been committed’ has to report it. In the context of cyber-crimes, such a system of ‘mandatory reporting’ would shift the onus onto viewers and supplement the existing reporting system. Mandatory provisions of this nature do not exist in the United States, where most of the larger social media companies are based.

Ultimately, possible solutions should focus on strengthening the existing reporting system rather than holding social media platforms liable.

Response to Online Extremism: Beyond India

In our previous posts, we traced the Indian response to online extremism as well as the alternative regulatory methods adopted worldwide to counter extremist narratives spread via the internet. At the international level, the United Nations has emphasised the need to counter extremists who use the internet for propaganda and recruitment. This post explores the responses of three countries that have often been the target of extremism: the UK, France and the USA. While strategies to counter extremism form part of larger counter-terror programmes, this post focuses on some measures adopted by these States that target online extremism specifically.

United Kingdom

In 2011, the UK adopted its ‘Prevent’ strategy, which seeks to ‘respond to the ideological challenge’ posed by terrorism and ‘prevent people from being drawn into terrorism’. The strategy seeks to counter ‘extremism’, which is defined as:

“vocal or active opposition to fundamental British values, including democracy, the rule of law, individual liberty and mutual respect and tolerance of different faiths and beliefs. We also include in our definition of extremism calls for the death of members of our armed forces”.

This definition has been criticised as over-broad and vague, with the potential to ‘clamp down on free expression’. In 2013, the Prime Minister’s Task Force on Tackling Radicalisation and Extremism (“Task Force”) submitted its report identifying the critical issues in tackling extremism and suggesting steps for the future. The Task Force recommended that the response to extremism must not be limited to dealing with those who promote violence; rather, it must target the ideologies that lead individuals to extremism. The report highlighted the need to counter extremist narratives, especially online. Its recommendations include building capabilities, working with internet companies to restrict access to such material, improving the process for public reporting of such content and including extremism as a filter for content accessed online. The report also recommended promoting community integration and suggested steps to prevent the spread of extremist narratives in schools and institutions of higher education. While suggesting these methods, the report reaffirmed that the proposals are not designed to ‘restrict lawful comment or debate’.

A number of the Task Force’s recommendations have since been adopted in the UK. For instance, the UK Government has set up a mechanism by which individuals can anonymously report online material promoting terrorism or extremism. In 2015, universities and colleges became legally bound to put in place policies to prevent extremist radicalization on campuses. Further, local authorities, the health sector, prisons and the police have all been accorded duties to aid in the fight against extremism.

The UK is also considering a Counter-Extremism and Safeguarding Bill (the “Bill”), which proposes tougher counter-extremism measures. The Bill empowers certain authorities to ban extremist groups, disrupt individuals engaging in extremist behaviour and close down premises that support extremism. However, the Bill has been criticised extensively by Parliament’s Joint Committee on Human Rights. The Committee identified gaps such as the failure to adequately define core concepts like ‘non-violent extremism’ and the use of measures like ‘banning orders’, which are over-broad and susceptible to misuse.

France

Reports reveal that France has become the largest source of Western fighters for the Islamic State and that nearly 9,000 radicalized individuals currently reside in France. Over the last few years, France has also witnessed a series of terrorist attacks, which has led the country to bolster its counter-terrorism and counter-extremism measures.

In November 2014, the French parliament passed anti-terror legislation that permits the government to block websites that ‘glorify terrorism’ and censor speech deemed an ‘apology for terrorism’, among other measures. A circular released in January 2015 explains that “apology for terrorism” refers to acts which present or comment on instances of terrorism “while basing a favourable moral judgement on the same”. In 2015, France blocked five websites, in one of its first instances of censoring jihadist content. Since then, France has continued to censor online speech under the broad offence of ‘apology for terrorism’, with harsh penalties. It has been reported that nearly 87 websites were blocked between January and November 2015, and that more than 700 people have been arrested under this new offence. The offence has been criticised for being vague, resulting in frequent prosecution of legitimate speech that does not constitute incitement to violence. In May 2015, another law was passed strengthening the surveillance powers of the State and requiring Internet Service Providers to give intelligence agencies unfettered access. This statute empowers authorities to order the immediate handover of user data without prior court approval. These legislations have been criticised for being over-broad and for incorporating measures that are unnecessary and excessive.

In addition to these measures, France launched an anti-jihadism campaign in 2015 which seeks to counter extremism and radicalization throughout society, focusing specifically on schools and prisons.

United States

The principal institution that develops counter-extremism strategies in the USA is the Bureau of Counterterrorism and Countering Violent Extremism. The Bureau has developed a Department of State & USAID Joint Strategy on Countering Violent Extremism, which aims to counter efforts by extremists to radicalize, recruit and mobilize followers to violence. To pursue this aim, the strategy incorporates measures like enhanced bilateral and multilateral diplomacy, strengthening of criminal justice systems and increased engagement with sectors like prisons, educational institutions and civil society. Promoting alternative narratives is a key component of the Bureau’s counter-extremism programme. However, the strategy has been criticised for revealing very few details about what it entails, despite extensive budget allocations. A lawsuit has been filed under the Freedom of Information Act claiming that authorities have refused to reveal information about the programme. Organisations fear that the initiatives under the programme have the potential to criminalize legitimate speech and target certain communities.

Conclusion

State responses to extremism have increased substantially in the past few years, with new programmes and measures being put in place to counter these narratives in the fight against terrorism. While the measures adopted differ from state to state, some strategies, such as promoting de-radicalisation in educational institutions and prisons, are common. At the same time, some of the measures adopted threaten to impact freedom of speech due to vague definitions and over-broad responses. It is critical for authorities to strike a balance between countering extremist narratives and preserving free thought and debate, more so in institutions of learning. Measures to counter extremist narratives must therefore be specific and narrowly tailored, with sufficient safeguards, in order to balance the right to security with the civil liberties of individuals.

John Doe orders: The Balancing Act between Over-Blocking and Curbing Online Piracy

The Bombay High Court recently passed a John Doe order laying down a set of safeguards to minimise over-blocking. The Delhi High Court, on the other hand, ordered the blocking of 73 websites for showing “substantial” pirated content. This post traces the history of John Doe orders in India and their impact on free speech, and evaluates the recent developments in this area.

John Doe Orders and their Impact on Freedom of Speech

John Doe or Ashok Kumar orders usually refer to ex-parte interim injunctions issued against defendants, some of whom may be unknown or unidentified at the time of obtaining the order. Well recognised in Commonwealth countries, the concept was imported to India in 2002 through an order passed against unknown cable operators to give relief to a TV channel in Taj Television v. Rajan Mandal. The trend of issuing John Doe orders to prevent piracy picked up pace in 2011, when the Delhi High Court passed a series of such orders. Since then, a stream of such orders has been passed authorising copyright holders to take action against unknown persons for future violations of their rights without moving the court again. The orders authorise copyright holders to intimate ISPs to take down the allegedly infringing content. In 2012, the Madras High Court clarified an earlier order (which had resulted in the blocking of a number of websites), stating that it pertained only to specific URLs and not entire websites. Despite this, John Doe orders for the blocking of websites remain commonplace.
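
The gap between URL-level and website-level blocking, which the Madras High Court’s clarification turned on, can be illustrated with a small sketch. This is a conceptual toy only; actual ISP blocking systems vary widely, and the hostnames and paths below are invented.

```python
# Conceptual contrast between URL-level and domain-level blocking.
# Real ISP implementations differ; all URLs here are hypothetical.
from urllib.parse import urlparse

blocked_urls = {"http://example-host.com/pirated-movie"}
blocked_domains = {"example-host.com"}

def blocked_by_url(url: str) -> bool:
    # Narrow block: only the specific infringing page is unreachable.
    return url in blocked_urls

def blocked_by_domain(url: str) -> bool:
    # Broad block: every page on the host goes down with the infringing one.
    return urlparse(url).hostname in blocked_domains

legitimate_page = "http://example-host.com/original-short-film"
print(blocked_by_url(legitimate_page))     # False: legitimate content survives
print(blocked_by_domain(legitimate_page))  # True: over-blocking in action
```

A domain-level rule is simpler for an ISP to enforce, which is one reason whole-site blocks persist despite the court’s URL-specific clarification.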

John Doe orders are passed ex parte due to the paucity of time and the difficulty of identifying defendants in such cases. However, these orders threaten to impair freedom of speech online for a host of reasons. First, they are granted on the basis of a mere ‘possibility’ of piracy, with no requirement to establish piracy before the court after blocking. This paves the way for negligence or misuse by copyright holders of the power to take down content. Second, they are usually passed on a minimal standard of evidence, on the word of the plaintiff, without sufficient scrutiny by the court of the URLs or websites submitted. Third, they do not require the copyright holders or ISPs to communicate the reasons for blocking to the persons whose content is taken down, leaving almost no recourse to those whose website or URL may be blocked mistakenly. Fourth, the burden of carrying out these orders falls on the ISPs, who block the websites or URLs erring on the side of caution.

The absence of scrutiny and the lack of safeguards lead to over-blocking of content. Users who suffer as a result of these over-broad orders often lack the knowledge or means to overturn them, resulting in the loss of legal and legitimate speech online. Further, without any requirement that the blocks be reaffirmed by the court, private parties (the copyright holders) themselves become adjudicators of copyright violations, hampering the rights of users affected by these orders.

Instances of over-blocking as a result of these orders are many. In May 2012, as a result of an order by the Madras High Court, a range of websites was blocked, including legitimate content on video-sharing sites like Vimeo. In 2014, the Delhi High Court issued a John Doe order mandating the blocking of 472 websites, including Google Docs, in the wake of the FIFA World Cup. Many questioned such widespread blocking on the mere assumption that the websites would support pirated screenings of the World Cup, especially without verification by the courts. The order was later narrowed.

The Bombay High Court Order

The jurisprudence regarding John Doe orders saw a shift when Justice Patel of the Bombay High Court took a significant step forward with his order dated July 26, 2016 for the movie Dishoom. The order recognises both the harms of piracy and the adverse impact of John Doe orders on unknown defendants, attempting to balance these ‘competing rights’. It lays down a multi-tier process to minimise the negative impact of John Doe orders and narrows blocking from entire websites (except in certain conditions) to specific URLs (see chart below).

[Chart: the multi-tier process laid down in the Bombay High Court order]

The order sets in place a mechanism that provides for selective blocking of content, verification of the list of URLs as well as safeguards for the unknown defendants.  Such a mechanism helps ensure that freedom of speech online is not trampled in the fight against online piracy.

The Delhi High Court Judgement

The Delhi High Court, in its judgement (not a John Doe order) dated July 29, 2016, blocked 73 websites in a case concerning the live streaming of pirated videos of cricket matches. While some lauded this judgement for its contribution to India’s fight against piracy, it is important to understand its many failings.

The High Court blocked the websites on a ‘prima facie’ view of the material placed before it by the plaintiffs, to the effect that the websites were entirely or to a large extent engaged in piracy. However, it remains unclear what standard the court used to determine the extent of piracy. Further, the complete blocking of a website encroaches upon the other party’s right to carry on business and freedom of expression; the standard for imposing such a restriction must therefore be high and well defined. This was recognised by Justice Patel in the Bombay High Court order, where the court clarified that while there is no prohibition on blocking an entire website, there is a need for the ‘most comprehensive audit or review reasonably possible’ to establish that the website contains ‘only’ illicit content.

Even though the judgement of the Delhi High Court is against 73 named defendants, the order was passed merely on a prima facie review of the material laid before it, which reflects the lack of any system of verification. A mere prima facie review is insufficient, as a third party whose website is mistakenly blocked would suffer unnecessarily. With no direction to put out a notice informing the defendants, an affected party may not even be aware of the blocking. The Delhi High Court order thus suffers from various problems that pave the way for over-blocking of content. The judgement also places an unfair burden on the government and raises questions regarding the role of the intermediary, which have been articulated in detail in a post by Spicy IP here.

Questions for the Future

These developments raise a number of issues for the future, the most prominent being the need to reconcile the differing legal developments on online piracy across India. Further, courts need to be more sensitive to the plight of unknown third parties while passing John Doe orders, following the lead of the Bombay High Court.

Two particular issues stand out. First, the need to develop a standard for blocking entire websites that is sufficiently high to prevent misuse and over-blocking. Second, the need for a neutral body that can verify blocking lists on behalf of the courts, ensuring sufficient checks in the system while keeping in mind the paucity of time in these cases. Any such body must possess the technical know-how to understand how these lists are put together and other related issues. The lag between technology and law is very real, as Justice Patel correctly pointed out, and these small steps will go a long way in bridging the gap.

(For more reading on this issue, see a piece published in the Mint here; for a more detailed analysis of the Bombay High Court order, see the Spicy IP post here.)

Internet Shutdowns: An Update

At the time of posting, mobile internet services remain suspended in parts of Jammu & Kashmir for the sixth consecutive day. The shutdown was enforced in response to the tense law and order situation prevailing in the Kashmir valley following the death of Burhan Wani, a top commander of the terrorist outfit Hizbul Mujahideen.

This shutdown, already the fourteenth in India this year, comes on the heels of the adoption of a resolution by the UN Human Rights Council (UNHRC) on the “promotion, protection and enjoyment of human rights on the internet”. Although the resolution stops short of recognizing access to the internet as a human right, it affirms that human rights exercised offline should also be protected online. The UNHRC had previously resolved to protect human rights online in its 2012 and 2014 sessions, but this resolution marks a significant improvement, as it specifically addresses the hitherto unaddressed issue of internet shutdowns. It condemns measures that “disrupt access to or dissemination of information online, in violation of international human rights law” and calls on states to refrain from such measures. The resolution is timely, coming at a juncture when governments worldwide are adopting, with increasing frequency, various strategies to shut down the internet. Governments have shut down the internet to counter problems ranging from civil unrest or uprisings, as in Zimbabwe most recently, to cheating in exams, as in Gujarat earlier this year.

Contrary to media reports that India had voted against the resolution, India voted in favour of three amendments to the resolution mooted by Russia and China. While commentators have been divided on whether these amendments are antithetical to the spirit of the resolution, India did vote for an amendment to weaken the emphasis on the “human rights-based approach” conceived of originally in the resolution. The cruel irony of this is amplified in the context of the questionable human rights record of the armed forces and the police in Jammu and Kashmir, which has experienced the highest number of shutdowns in the country.

In our previous posts, we argued that the implementation of shutdowns under Section 144 of the Code of Criminal Procedure, 1973 suffers from fatal over-breadth and is constitutionally unviable. In practical terms, the hazards of implementing a widespread internet shutdown cannot be overstated. The suspension of the internet, especially in situations of riot or violence, is especially problematic for citizens. As reported in Jammu and Kashmir, the lack of reliable information through communication channels has contributed to the spread of rumours and the worsening of the situation in many parts of the valley. The communication breakdown has adversely affected the provision of much-needed health and emergency services, in addition to disrupting trade and commerce significantly. The collateral damage of internet shutdowns becomes especially relevant when considered against the Government’s stated mission in endorsing programmes like Digital India and the Smart Cities Mission, which will rely substantially on the internet for smooth functioning and delivery of services. With the mechanics of everyday life increasingly intertwined with the internet, it is essential to ask whether the internet should be shut down without procedural transparency.

As previously stated, India is not alone in implementing internet and communication network shutdowns of this nature. Unsurprisingly, even in jurisdictions with a strong tradition of respect for free speech, executive procedures for shutting down the internet and other communication services at the government’s instance remain shrouded in secrecy. In the US, for instance, a policy known as Standard Operating Procedure 303 allows for the shutdown of cell-phone services anywhere in the country in the event of a crisis. As in India, on account of the lack of transparency and accountability, activists fear that the power may be abused. A petition that sought more information on the protocol was declined by the Supreme Court of the United States. In the UK too, a localized mobile network shutdown implemented by the City of London Police following the terrorist bombings in London came in for heavy criticism, having affected over a million individuals’ communications. A review committee found that the protocol needed to be reviewed and restructured to provide for adequate and effective procedures.

Additionally, the conversation on internet shutdowns is increasingly focused on the prospect of shutting down the internet in the event of a cyber attack. In the UK, for instance, specific legislation enables government-ordered suspension of the internet, bringing about a “web Armageddon”. In India, the debate and discourse around internet shutdowns are still nascent but will only grow in significance. The Government is in the process of considering amendments to the Information Technology Act, 2000 to ramp up cyber security provisions. As we progress toward systems that are completely digitized, the likelihood of cyber-attacks will only increase, which raises the question of whether the Government can choose to shut down the internet and what procedures bind it in doing so.

The internet is a great enabler of democracy, having greatly lowered the hurdles to free speech and assembly. Any attempt at shutting down the internet must be accompanied by structured efforts to avoid the arbitrary exercise of such power. The imminent threat of an Emergency-like situation gagging the internet may seem alarmist at the moment, but there certainly needs to be an active and concerted effort to examine the legality and necessity of shutdowns while putting in place strict procedural standards.

Google de-platforms Taliban Android app: Speech and Competition implications?

Written by Siddharth Manohar

A few weeks ago, Google pulled an app from its online application marketplace, the Google Play Store. The app had been developed by the Taliban to propagate violent extremist views and spread hateful content. Google stated that it removed the app because it violated Google Play Store policy.

Google maintains a comprehensive policy statement for any developer who wishes to upload an app for public consumption on the Play Store. Apart from laying down rules for the Play Store as a marketplace, the policy also places certain substantive conditions on developers using the platform to reach users.

Amongst other restrictions, one head reads ‘Hate Speech’. It says:

We don’t allow the promotion of hatred toward groups of people based on their race or ethnic origin, religion, disability, gender, age, veteran status, or sexual orientation/gender identity.

Google found the Taliban app to violate this stipulation in the Play Store policy, as confirmed by a Google spokesperson, who said that the policies are “designed to provide a great experience for users and developers. That’s why we remove apps from Google Play that violate those policies.” The app was first detected by an online intelligence group which claims to monitor extremist content on social media. It was developed to increase access to the Taliban’s online presence by presenting content in the Pashto language, which is widely spoken in the Afghan region.

The application itself, of course, remains available for download on a number of other websites; it was its content that led to its removal from this one marketplace. This is an interesting application of the restriction of hateful speech, because the underlying principle of Google’s policy itself pays heed to the understanding that the development and sale of apps is a form of free speech.

A potentially interesting debate in this area is the extent to which the contours of permissible speech can be decided by a private entity on its public platform. The age-old debate about permissible restrictions on speech finds expression in the particular “marketplace of ideas” that is the Google Play Store. On one hand, there is the concern of protecting users from harmful and hateful content: speech that targets and vilifies individuals based on some aspect of their identity, be it race, gender, caste, colour, or sexual orientation. On the other hand, there will always be the concern that monitoring of speech by the overseeing authority becomes excessive and censors certain opinions and perspectives from entering the mainstream.

This particular situation provides an easy example, in the form of an application developed by an expressly terrorist organisation. It would nevertheless be useful to keep an eye out for the kinds of applications that are brought under the ambit of such policies in the future, and the principles used to justify them.

The question of what control, if any, can be exercised over this kind of editorial power of Google over its marketplace is also a relevant one. Google can no doubt justify its editorial powers in relatively simple terms: it has explicit ownership of the entire platform and can decide the basis on which to allow developers onto it. However, the Play Store accounts for an overwhelmingly large share of how users access applications on a daily basis. Google’s Play Store policies therefore have a significant impact on how, and whether, applications are accessed by users in the context of the entire marketplace of applications and users. The policy implication is that the principles underlying Google’s Play Store policies need to be scrutinised for their impact on the entire app development ecosystem. This is evidenced by the fact that the European Commission pulled up Google about a year ago over competition concerns regarding its Android operating system, and has recently communicated its list of objections to Google. The variety of speech and competition concerns applicable to this context makes it one to watch closely.


Free Speech & Violent Extremism: Special Rapporteur on Terrorism Weighs in

Written by Nakul Nayak

Yesterday, the Human Rights Council came out with an advance unedited version of a report (A/HRC/31/65) of the Special Rapporteur on protection of human rights while countering terrorism. This report in particular deals with protecting human rights while preventing and countering violent extremism. The Special Rapporteur, Ben Emmerson, has made some interesting remarks on extremist speech and its position in the hierarchy of protected and unprotected speech.

First, it should be noted that the Report tries to grapple with and distinguish between the often-interchanged terms “extremism” and “terrorism”. Noting that violent extremism lacks a consistent definition across countries, and in some instances any definition at all, the Report goes on to liken it to terrorism. The Special Rapporteur also acknowledges the lack of understanding of the “radicalization process” by which innocent individuals become violent extremists. While the Report does not suggest an approach to defining either term, it briefly contrasts the definitions laid down in various countries. There does, however, seem to be some consensus that the ambit of violent extremism is broader than terrorism and comprises a range of subversive activities.

The important section of the Report, from the perspective of free speech, deals with incitement to violent extremism and efforts to counter it. The Report cites UN Security Council Resolution 1624 (2005), which calls for the adoption of legislative measures as effective means of addressing incitement to terrorism. However, the Report insists that there are “serious human rights concerns linked to the criminalization of incitement, in particular around freedom of expression and the right to privacy”.[1] The Report then quotes the UN Secretary-General and the Special Rapporteur on Free Expression laying down various safeguards for laws criminalizing incitement. In particular, such laws must prosecute only incitement that is directly related to terrorism and has the intention and effect of promoting terrorism, and must provide judicial recourse, among other things.[2]

This gives us an opportunity to discuss the standards governing free speech restrictions in India. While the Supreme Court expressly imported the speech-protective American standard of incitement to imminent lawless action in Arup Bhuyan, confusion persists over the standard applicable in justifying any restriction on free speech. The Supreme Court’s outdated ‘tendency’ test, which does not require an intimate connection between speech and action, still finds place in today’s law reports. This is evident from the celebrated case of Shreya Singhal: after a lengthy analysis of India’s public order jurisprudence, and despite advocating a direct connection between speech and public disorder, Justice Nariman muddies the waters by examining Section 66A of the IT Act under the ‘tendency’ test. Some coherence in incitement standards is needed.

The next pertinent segment of the Report deals specifically with the impact of State measures restricting expression, especially online content. Interestingly, the Report suggests that “Governments should counter ideas they disagree with, but should not seek to prevent non-violent ideas and opinions from being discussed”.[3] This brings to mind the recent proposal of the National Security Council Secretariat (NSCS) to set up a National Media Analytics Centre (NMAC) to counter negative online narratives through press releases, briefings, and conferences. While nothing concrete has emerged, with the proposal still in the pipeline, safeguards must be implemented to address chilling-effect and privacy concerns. It may be noted here that the Report’s remarks are limited to countering speech that forms an indispensable part of the “radicalization process”. The NMAC, however, covers negative content across the online spectrum, with its only marker being the “intensity or standing of the post”.

An important paragraph of the Report, perhaps the gist of its free speech perspective on combating violent extremism, reflects a visible unease in determining the position of extremist speech that glorifies or advocates terrorism. The Report notes the Human Rights Committee’s stand that terms such as “glorifying” terrorism must be clearly defined to avoid unnecessary incursions on free speech. At the same time, the Secretary-General has deprecated the “troubling trend” of criminalizing the glorification of terrorism, considering it an inappropriate restriction on expression.[4]

These propositions are in stark contrast to India’s terror legislation, the Unlawful Activities (Prevention) Act, 1967. Section 13 punishes anyone who “advocates, … advises … the commission of any unlawful activity …”. An unlawful activity is defined in Section 2(o) to include speech acts that:

  • support a claim of “secession of a part of the territory of India from the Union”, or
  • “which disclaims, questions … the sovereignty and territorial integrity of India”, or
  • rather draconically, “which causes … disaffection against India”.

It should also be noted that all three offences are content-based restrictions on free speech, i.e., limitations based purely on the subjects that the words deal in. Textually, these provisions do not require an examination of the intent of the speaker, the impact of the words on the audience, or indeed the context in which the words are used.

Finally, the Report notes the views of the Special Rapporteur on Free Expression on hate speech, who characterizes most efforts to counter it as “misguided”. However, the Report also “recognizes the importance of not letting hate speech go unchecked …” In one sense, the Special Rapporteur expressly rejects American First Amendment jurisprudence, which does not acknowledge hate speech as a permissible ground for restricting free speech. At the same time, the Report’s insistence that “the underlying causes should also be addressed”, instead of resting satisfied with mere prosecutions, is a policy aspiration that needs serious thought in India.

This Report on violent extremism (as distinct from terrorism) is much needed and timely. The strong human rights concerns it espouses, with their attendant emphasis on a context-driven approach to prosecuting speech acts, are a sobering reminder of the many inadequacies of Indian terror law and its respect for fundamental rights.

Nakul Nayak was a Fellow at the Centre for Communication Governance from 2015-16.

[1] Para 24.

[2] Para 24.

[3] Para 38.

[4] Para 39.

Anupam Kher’s Cockroach Tweet: Cultural Reference or Hate Speech?

Written by Siddharth Manohar

The noise surrounding the recent controversy over a tweet by Indian actor (and UN Ambassador for Gender Equality) Anupam Kher made it difficult to examine why it caught so much attention. That it did is beyond doubt: it garnered over six thousand hits, significantly more than almost all of his other tweets, and was followed by plenty of coverage and promotion from its audience, who responded while sharing their own views. Here I look at whether there was any basis for the criticism the tweet received, and the degree to which it was justified.

To start off, it would be useful to reproduce the lines in their original form:

घरों में पेस्ट कंट्रोल होता है तो कॉक्रोच, कीड़े मकोड़े इत्यादि बाहर निकलते है घर साफ़ होता हैवैसे ही आजकल देश का पेस्ट कंट्रोल चल रहा है

Which translates into: “During pest control in houses, the cockroaches and other insects etc. are removed. The house gets cleaned. Similarly, pest control of the country is going on these days.”

On an initial reading, it is a vague and seemingly harmless insult. The term ‘cockroach’, which has attracted the most attention, seems to be employed as a characterisation of anything undesirable, be they problems, politics, or people. As a standalone insult, it is far less venomous than some of the other material one may find on the website. Apart from containing a reference to one of the actor’s films, it is also vague and targets no group explicitly. It is therefore understandable that the issue has its share of people who may be bewildered by what could possibly be so harmful in this particular tweet, and who are likely to pass off the criticism as the kind of overreaction that seems increasingly common.

To understand whether there is a valid criticism of the tweet, we must look at the larger context in which such a term is understood. Comparing groups of people to animals and pests has a long, concrete, and troubling history. Over time and study, the practice has acquired the name ‘dehumanisation’: the use of language and discourse to make a group of people seem ‘less than human’. It is a widely documented and extremely effective method of incitement to violence.

The reasoning behind its usage is also interesting and relevant. According to Helen Fein (Benesch, 2008), the purpose of this kind of discourse is to put a certain group of people outside the limits of moral consideration and obligation. The default moral understanding of most people is underpinned by the principle that it is unacceptable to carry out violent acts of hate, or to kill any person. The repeated categorisation of a group of people as the ‘other’, and the polarisation of their identity as a group not worthy of human respect or equal rights, works on the mind of the larger public: acts of violence and crimes start to seem more acceptable and less outrageous when committed against this group, and the process of dehumanisation escalates over time.

These narratives most often target a specific identity, most notoriously ethnicity and religious identity. One of the most prominent examples occurred during the inter-war period in Germany, where a large amount of material alienating and dehumanising Jewish people was systematically churned out by state agencies instructed with an agenda. Similarly, the build-up to the Rwandan genocide in 1994 saw a very strong narrative demonising the Tutsi ethnic group, labelling them Inyenzi (cockroaches) who could not contribute to society because of who they were, their basic identity. Such a narrative creates a larger feeling of resentment among the public against the target group, making it easier to commit acts of violence against them. Susan Benesch has argued that there cannot, in fact, be a large-scale violent attack against a group of people living amongst a majority without the cooperation or tacit acceptance of that larger group.

The comparison of people to pests and animals has repeatedly been used as a tool in this process of moulding public sentiment against certain groups. In these cases, the narrative it served to create helped in the execution of large-scale genocidal operations that have killed millions of people over the decades. Dehumanisation has also been included in an academic study devising a ten-step model of genocide. The historical evidence overwhelmingly suggests that the use of such terms to build a narrative is part of a larger build-up towards organised violence along lines of group identity.

To suggest that an Indian actor is sending out a call for violence would be ill thought out and ignorant of the complexity of the issue. What does need to be observed, however, is how easily discussions are used to create and divide identities, and what values are ascribed to those identities. While healthy and vociferous debate forms an important part of a democracy, equally important is the tangible effect that speech can have on its immediate surroundings. It is the effects and consequences (and harm) of speech that give rise to justifications for its regulation, and it is therefore always useful to keep a watchful eye on where public discourse takes us.
