Understanding the ‘NetzDG’: Privatised censorship under Germany’s new hate speech law

By William James Hargreaves

The Network Enforcement Act

The Network Enforcement Act (NetzDG), a law passed on 30 June 2017 by the German Government, operates to fine social media companies up to 50 million euros – approximately 360 crore rupees – if they persistently fail to remove hate speech from their platforms within 24 hours of the content being posted. Companies will have up to one week where the illegality of the content is debatable.

NetzDG is intended to hold social media companies financially liable for the opinions posited using their platform. The Act will effectively subject social media platforms to the stricter content standards demanded of traditional media broadcasters.

Why was the act introduced?

Germany is one of the world’s strictest regulators of hate speech. The country’s Criminal Code covers defamation, public threats of violence and incitement to illegal conduct, and provides for incarceration for Holocaust denial or inciting hatred against minorities. Germany is a country sensitive to the persuasive power of oratory in radicalizing opinion. The parameters of these sensitivities are being tested as the influx of more than one million asylum seekers and migrants has catalyzed a notably belligerent public discourse.

In response to the changing discourse, Facebook and a number of other social media platforms consented in December 2015 to the terms of a code of conduct drafted by the Merkel Government. The code of conduct was intended to ensure that platforms adhered to Germany’s domestic law when regulating user content. However, a study monitoring Facebook’s compliance found the company deleted or blocked only 39 percent of reported content, a rate that put Facebook in breach of the agreement.

NetzDG turns the voluntary agreement into a binding legal obligation, making Facebook liable for any future failure to adhere to its terms.

In a statement made following the law’s enactment, German Justice Minister Heiko Maas declared: ‘With this law, we put an end to the verbal law of the jungle on the Internet and protect the freedom of expression for all… This is not a limitation, but a prerequisite for freedom of expression’. The premise of Minister Maas’s position, and the starting point for the principles that validate the illegality of hate speech, is that verbal radicalization is often the precursor to physical violence.

As the world’s predominant social media platform, Facebook has curated unprecedented, and in some respects unconditioned, access to people and their opinions. Given the extent of that access, this post will focus on the possible effects of the NetzDG on Facebook and its users.

Facebook’s predicament

  • Regulatory methods

How Facebook intends to observe the NetzDG is unclear. The social media platform, whose users now constitute one-quarter of the world’s population, has previously been unwilling to disclose the details of its internal censorship processes. However, given the potential financial exposure and the sustained increase in user content, Facebook must, to some extent, increase its capacity to evaluate and regulate reported content. In response, Facebook announced in May that it would nearly double the number of employees tasked with removing content that violates its guidelines. Whether this increase in capacity will be sufficient will be determined in time.

However, and regardless of the move’s effectiveness, Facebook’s near doubling of capacity implies that human interpretation is the final authority, and that implication raises a number of questions. To what extent can manual censorship keep up with the consistent increase in content? Can the same processes maintain efficacy in a climate where hate speech is increasingly prevalent in public discourse? If automated censorship is necessary, who decides the algorithm’s parameters, and how sensitive might those parameters be to the nuances of expression and interpretation? In passing the NetzDG, the German Government has relinquished the State’s authority to fully decide the answers to these questions. The jurisdiction of the State in matters of communication regulation has, to a certain extent, been privatised.

  • Censorship standards

Recently, the investigative journalism platform ProPublica claimed possession of documents purported to be internal censorship guidelines used at Facebook. The unverified guidelines instructed employees to remove the phrase ‘migrants are filth’ but to permit ‘migrants are filthy’. Whether the documents are legitimate is to some extent irrelevant: they provide a useful example of the specificity required where the aim is to guide one person’s interpretation of language toward a specific end – in this instance toward a correct judgment of legality or illegality.

Regardless of the degree of specificity, it is impossible for any formulation of guidelines to cover every possible manifestation of hate speech. Interpreting reported content will therefore necessarily require some degree of discretion. This necessity raises the question: to what extent will affording private entities discretionary powers of censorship impede freedoms of communication? The question is particularly pressing where the discretion afforded is conditioned by financial risk and a determination is required within a 24-hour period.

  • Facebook’s position

Statements made by Facebook prior to the legislation’s enactment expressed concern about the effect the Act would have on the already complex issue of content moderation. ‘The draft law provides an incentive to delete content that is not clearly illegal when social networks face such a disproportionate threat of fine’, a statement noted. ‘(The Act) would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies’. Facebook’s reservation is telling: the company’s reluctance to adopt the role of moderator to the extent required alludes to the potential consequences of the liability imposed by the Act.

The problem with imposing this form of liability

Any decision made by a social media platform to censor user content will be supported by the anti-discrimination principles prescribed by the NetzDG. However, where the motivation behind discretionary decision-making shifts away from social utility towards financial management, the guiding considerations become efficiency and risk minimisation. Efficiency and risk minimisation in this instance require Facebook either to (i) increase capacity, which in turn results in an increased financial burden, or (ii) adopt guidelines that minimise exposure.

The approach adopted by Facebook appears to be to increase capacity. However, Facebook’s concern that the Act creates financial incentives to adopt guidelines that minimise exposure is significant. It demonstrates an understanding that requiring profit-motivated companies to do the work of the State within a 24-hour time frame will necessarily require a different set of parameters than those imposed on the regulation of oral hate speech. If Facebook, in drafting and applying those parameters, decides to err on the side of caution and, in some instances, censor otherwise legal content, that decision will have directly infringed the freedom of communication enjoyed by German citizens.

A democracy must be able to accommodate contrasting opinions if it purports to respect rights of communication and expression. Conversely, limitations on rights enjoyed may be justified if they benefit the majority. The NetzDG is Germany’s recognition that the nature of online communication – the speed at which ideas promulgate and proliferate, and the disconnect between comment and consequence created by online anonymity – requires that the existing limitations on the freedom of communication be adapted. Whether instances of infringement are warranted in the current climate is a difficult and complicated extension of the debate between the utility of regulating hate speech and the corresponding consequences for the freedoms of communication and expression. The decision to pass the NetzDG suggests the German Government considers the risk of infringement acceptable when measured against the consequences of unfettered hate speech.

Public recognition that the NetzDG poses a risk is important. It is best practice within a democracy that any new limit to liberty, oral or otherwise, be questioned and a justification given. Here the justification seems well-founded. However, the answers to the questions posed by sceptics may prove telling as Germany positions itself at the forefront of the debate over online censorship.

(William is a student at the University of Melbourne and is currently interning at CCG)


Reviewing the Law Commission’s latest hate speech recommendations

Introduction

The Law Commission has recently released a report on hate speech laws in India. The Supreme Court in Pravasi Bhalai vs. Union of India asked the Law Commission to recommend changes to existing hate speech laws and to “define the term hate speech”. The report discusses the history of hate speech jurisprudence in India and in certain other jurisdictions. In addition, it stresses the difficulty of defining hate speech and the lack of a concise definition. In the absence of such a definition, certain ‘identifying criteria’ are mentioned to help detect instances of hate speech. The report also discusses the theories of Jeremy Waldron (the ‘dignity’ principle) and makes a case for protecting the interests of minority communities by regulating speech. In this regard, two new sections for the IPC have been proposed. They are as follows:

(i) Prohibiting incitement to hatred-

“153 C. Whoever on grounds of religion, race, caste or community, sex, gender identity, sexual orientation, place of birth, residence, language, disability or tribe –

(a)  uses gravely threatening words either spoken or written, signs, visible representations within the hearing or sight of a person with the intention to cause, fear or alarm; or

(b)  advocates hatred by words either spoken or written, signs, visible representations, that causes incitement to violence shall be punishable with imprisonment of either description for a term which may extend to two years, and fine up to Rs 5000, or with both.”.

(ii) Causing fear, alarm, or provocation of violence in certain cases.

“505 A. Whoever in public intentionally on grounds of religion, race, caste or community, sex, gender, sexual orientation, place of birth, residence, language, disability or tribe-

uses words, or displays any writing, sign, or other visible representation which is gravely threatening, or derogatory;

(i) within the hearing or sight of a person, causing fear or alarm, or;

(ii) with the intent to provoke the use of unlawful violence,

against that person or another, shall be punished with imprisonment for a term which may extend to one year and/or fine up to Rs 5000, or both”.

The author is of the opinion that these recommended amendments are vague and broadly worded and could lead to a chilling effect and over-censorship. Here are a few reasons why the recommendations might not be compatible with free speech jurisprudence:

  1. Three-part test

Article 10 of the European Convention on Human Rights lays down three requirements that must be fulfilled for a restriction on free speech to be warranted. The Law Commission report also discusses this test; it includes the requirement that a measure be ‘prescribed by law’, the need for a ‘legitimate aim’ and the test of ‘necessity and proportionality’.

Under the ‘prescribed by law’ standard, it is necessary for a restriction on free speech to be ‘clear and not ambiguous’. For instance, a phrase like ‘fear or alarm’ (existing in Section 153A and Section 505) has been criticized for being ‘vague’. Without defining or restricting this term, the public would not be aware of what constitutes ‘fear or alarm’ and would not know how to comply with the law. This standard was also reiterated in Shreya Singhal vs. Union of India, where it was held that the ambiguously worded Section 66A could be problematic for innocent people, since they would not be aware of “which side of the line they fall” on.

  2. Expanding scope to online offences?

The newly proposed sections also mention that any ‘gravely threatening words within the hearing or sight of a person’ would be penalized. Presumably, the phrase ‘within the sight or hearing of a person’ broadens the scope of this provision and could allow online speech to come under the ambit of the IPC. This phrase is similar to the wording of Section 5(1) of the Public Order Act 1986[1] in the United Kingdom, which penalizes “harassment, alarm or distress”. Even though the section does not explicitly mention that it covers offences on the internet, it has been presumed to do so.[2]

Similarly, if the intent of the framers of Section 153C is to expand its scope to cover online offences, it might introduce the same issues as the now-omitted Section 66A of the IT Act did. Section 66A penalized the transmission of information which was ‘menacing’ or which promoted ‘hatred or ill will’. The over-breadth of the terms in the section led to it being struck down. Another reason for striking it down was the lowering of the ‘incitement’ threshold (discussed below). Even though the proposed Section 153C does not provide for as many grounds (hatred, ill will, annoyance, etc.), it does explicitly lower the threshold from ‘incitement’ to ‘fear or alarm’/‘discrimination’.

  3. The standard of ‘hate speech’

The report also advocates penalizing the ‘fear or alarm’ caused by such speech, since it could potentially have the effect of ‘marginalizing a section of the society’. As mentioned above, it has been explicitly mentioned that the threshold of ‘incitement to violence’ should be lowered and that factors like ‘incitement to discrimination’ should also be considered.

The Shreya Singhal judgment drew a distinction between ‘discussion, advocacy and incitement’, stating that a restriction justifiable under Article 19(1) (a) of the Constitution would have to amount to ‘incitement’ and not merely ‘discussion’ or ‘advocacy’. This distinction was drawn so that discussing or advocating ideas which could lead to problems with ‘public order’ or disturbing the ‘security of the state’ could be differentiated from ‘incitement’ which establishes more of a ‘causal connection’.

Similarly, if the words used contribute to causing ‘fear or alarm’, the threshold of ‘incitement’ would be lowered, and constitutionally protected speech could be censored.

Conclusion

Despite the shortcomings mentioned above, the report is positive in a few ways. It draws attention to important contemporary issues affecting minority communities and how speech is often used to mobilize communities against each other. It also relies on Jeremy Waldron’s ‘dignity principle’ to make a case for imposing differing hate speech standards to protect minority communities. In addition, the grounds for discrimination now include ‘tribe’ and ‘sexual orientation’ amongst others.

However, existing case law, coupled with recent instances of censorship, could make the insertion of these provisions troubling. India’s relationship with free speech is already dire; the Press Freedom Index ranks the country at 133 (out of 180) and the Freedom on the Net report states that India is ‘partly free’ in this regard. The Law Commission might need to reconsider the recommendations for the sake of upholding free speech. Pravasi Bhalai called for sanctioning politicians’ speeches, but the recommendations made by the Law Commission might be far-reaching and the effects could be chilling.


[1] Section 5- Harassment, alarm or distress.
(1)A person is guilty of an offence if he—
(a)uses threatening or abusive words or behaviour, or disorderly behaviour, or
(b)displays any writing, sign or other visible representation which is threatening or abusive,
within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby.

[2] David Wall, Cybercrime: The Transformation of Crime in the Information Age, Page 123, Polity.

Online Extremism and Hate Speech – A Review of Alternate Regulatory Methods

Introduction

Online extremism and hate speech on the internet are growing global concerns. In 2016, the EU signed a code of conduct with social media companies including Facebook, Google and Twitter to effectively regulate hate speech on the internet. The code, amongst other measures, discussed stricter sanctions on intermediaries (social media companies) in the form of a ‘notice and takedown’ regime, a practice which has been criticised for creating a ‘chilling’ effect and leading to over-censorship.

While this system is still in place, social media companies are attempting to adopt alternative regulatory methods. If companies could ensure that they routinely track their websites for illegal content, before government notices are issued, this could save them time and money. This post will attempt to offer some insight into alternative modes of regulation used by social media companies.

YouTube Heroes – Content Regulation by Users

YouTube Heroes was launched in September 2016 with the aim of efficiently regulating content. Under this initiative, YouTube users are allowed to ‘mass-flag’ content that goes against the Community Guidelines. The Community Guidelines specifically prohibit instances of hate speech. As per the Guidelines, content that “promotes violence or hatred against individuals based on certain attributes” would amount to hate speech. These ‘attributes’ include, but are not limited to, race, gender and religion.

‘Mass-flagging’ is just one of the many tools available to a YouTube Hero. The system is based on points and ranks, with users earning points for helping translate videos and for flagging inappropriate content. As they climb the ranking system, users gain exclusive perks, like the ability to directly contact YouTube staff. ‘Mass-flagging’ is in essence the same as flagging a video, an option YouTube already offered. However, the incentive of gaining access to private moderator forums and YouTube staff could lead users to flag videos for extraneous reasons. While ‘mass-flagged’ videos are reviewed by YouTube moderators before being taken down, the initiative has still raised concerns.

It has been criticised for giving free rein to users, who may flag content because of personal biases, leading to ‘harassment campaigns’. Popular YouTube users have panned YouTube Heroes, fearing the possibility of their videos being targeted by ‘mobs’. Despite the review system in place, users have also expressed doubts about YouTube’s ability to accurately take down flagged content. Since the initiative is in its testing stage, it is difficult to determine what its outcome will be.

Facebook’s Online Civil Courage Initiative – Counter Speech

Governmental authorities across the world have been attempting to curb hate speech and online extremism in myriad ways. For instance, in November 2015, an investigation involving one of Facebook’s European Managing Directors was launched. The Managing Director was accused of letting Facebook host hate speech. As the investigation drew to an end, Facebook representatives were not implicated. However, this investigation marked an increase in international pressure to effectively deal with hate speech.

Due to growing pressure from governmental authorities, Facebook began to ‘outsource’ content removal. In January 2016, a German company called ‘Arvato’ was delegated the task of reviewing and taking down reported content, along with Facebook’s Community Operations Team. There is limited public information on the terms of service or rules Arvato is bound by. In the absence of such information, ‘outsourcing’ could contribute to a private censorship regime. With no public guidelines in place, the outsourcing process is neither transparent nor accountable.

Additionally, Facebook has been working with other private bodies to regulate content online. Early in 2016, Facebook, in partnership with several NGOs, launched the Online Civil Courage Initiative (OCCI) to combat online extremism with counter-speech. COO Sheryl Sandberg said that ‘censorship’ would not put an end to hate speech and that counter-speech would be a far more effective mode of regulation. Under this initiative, civil societies and NGOs are ‘rewarded’ with ad credits, marketing resources, and strategic support for countering speech online.

It is pertinent to note that the Information Pack on Counter Speech Engagement is the only set of guidelines made public by OCCI. These guidelines provide information on planning a counter-speech campaign. An interesting aspect of the information pack is the section on ‘Responding and Engaging during a campaign’. Under this section, comments are categorised as ‘supportive, negative, constructive, antagonistic’. A table suggests how different categories of comments should be ‘engaged with’. Surprisingly, ‘antagonistic’ comments are to be ‘ignored, hidden or deleted’. The information pack does not attempt to define any of these categories. These vaguely worded guidelines could lead to confusion amongst NGOs. While studies have shown that counter-speech might be the most effective way to deal with online extremism, OCCI would have to make major changes to reach the goals of the counter-speech movement.

In October 2016, Facebook reportedly came under scrutiny again. A German Federal Minister stated that Facebook was still not effectively dealing with hate speech targeted at refugees and that another investigation might be in the pipeline.

Conclusion

It remains to be seen whether the alternative regulatory methods adopted by social media companies will effectively deal with hate speech and online extremism.

It is important to note that social media companies are ‘outsourcing’ internal regulation to private bodies or users (YouTube Heroes, Arvato and OCCI). These private bodies might amplify the problems faced by the intermediary liability system, which could lead to ‘over-censorship’. That system has been criticised for its ‘notice and takedown’ regime: non-compliance with takedown orders attracts strict sanctions, and fear of these sanctions could lead intermediaries to take down content that falls into grey areas but is not illegal.

However, under the internal regulation method, social media companies will continue to function under the fear of state pressure. Private bodies like Arvato and NGOs affiliated with OCCI will also regulate content, with the incentive of receiving ‘advertisement credit’ and ‘points’. This could lead to over-reporting for the sake of incentives. Coupled with pressure from the state, this might lead to a ‘chilling’ effect.

In addition, some of these private bodies do not operate in a transparent manner. For instance, providing public information on Arvato’s content regulation activities and the guidelines they are bound by would help create a far more accountable system. Further, the OCCI needs to have clearer, well-defined policies to fulfill the objectives of disseminating counter-speech.


Seven Judge Constitutional Bench defining the limits of Section 123(3) RPA: Day 2 Updates

NOTE: The title of the post was edited subsequent to the SC rejecting a plea to reexamine the meaning of Hindutva as interpreted in the 1996 Manohar Joshi judgment

Mr. Arvind P. Datar continued his arguments on Day 2. He commenced by referring to his earlier arguments from the previous day on the interplay of Sections 98 and 99 of the Representation of People Act, 1951 (‘RPA’) and reiterated the issues framed by the three judge bench mentioned here.

He submitted that there is no conflict with the stand taken by the Supreme Court in the Manohar Joshi case. He read out several relevant portions of the judgment which discuss the mandatory nature of Section 99, especially where a returned candidate is accused of a corrupt practice vicariously for the conduct of any other person with his consent. He stated that the question regarding the returned candidate being guilty of corrupt practice can be decided only at the end of the trial, after an enquiry against the other person is concluded by issuing them notices under Section 99; accordingly, the trial under Sections 98 and 99 has to be a composite trial. According to Mr. Datar, it would lead to an absurd situation if the trial against the returned candidate were concluded first and the proceedings under Section 99 then commenced for the purpose of deciding whether any other person is also required to be named as being guilty of the corrupt practice. After extensive arguments on this issue, Justice Goel was of the opinion that the trial under Sections 98 and 99 must be one composite trial, which may take place in two steps but not in two separate phases.

The Court then posed a question to Mr. Datar regarding the stage at which notice can be issued to a third party and the nature of such notice under Sections 98 and 99 since none of the previous cases have examined or answered this issue. Mr. Datar reiterated his submission that Sections 98 and 99 have to be interpreted to mean that notice to a third party can be issued only during trial and not at the conclusion of the trial. Furthermore, the Chief Justice opined that a notice cannot be issued mechanically by the High Court. Before issuing such notice, the High Court has to be prima facie satisfied with the role of the collaborators in the commission of the corrupt practice.

In regard to the nature of notice under Section 99, Mr. Datar referred to the third issue framed by the three judge bench i.e.,

“On reaching the conclusion that consent is proved and prima facie corrupt practices are proved, whether the notice under Section 99(1) proviso (a) should contain, like mini judgment, extraction of pleadings of corrupt practices under Section 123, the evidence – oral and documentary and findings on each of the corrupt practices by each of the collaborators, if there are more than one, and supply them to all of them for giving an opportunity to be complied with?”

Mr. Datar contended that the notice to a third party or collaborator should contain the specific charges and specific portions of the speech allegedly amounting to corrupt practice. With reference to the Manohar Joshi case, he contended that the notice does not have to be in the form of a mini judgment. At this juncture, the Chief Justice expressed reservations on the use of the phrase “mini judgment” and opined that it is not appropriate to use the word in this context.

The Court also observed that the judicial principles that govern the analogous provision contained in Section 319 of the Criminal Procedure Code should also apply to Section 99 of the RPA. The Court further observed that since it is a quasi-criminal charge under the RPA, apart from the evaluation of evidence, the third person or collaborator to whom notice is being issued has to be informed of the reasons for such issuance of notice.

Thereafter, the Court considered the issue of ‘naming’ of a third person or a collaborator under Section 99. The issues under consideration were firstly, when can you ‘name’ a third party or collaborator and secondly, whether ‘naming’ is mandatory under Section 99. Mr. Datar contended that on a conjoint reading of Sections 98, 99 and 123(3), it is clear that there are only three categories of persons who can be named i.e. the candidate, his agent or any other person who has indulged in corrupt practices with the consent of the candidate.

While dealing with this subject, the Chief Justice posed a very pertinent question: can a person be ‘named’ for corrupt practices under Section 99 for a speech made prior to the elections? To illustrate the point, he gave an instance where elections may be scheduled four years away. A person preparing to contest the elections may request some religious leaders to make speeches on his behalf. The candidate may then use the video recording of the speech at the time of elections. In such a situation, can the religious leaders be ‘named’ under Section 99 for having committed a corrupt practice, since the speeches were made prior to the notification of elections?

After testing various such propositions, the Chief Justice concluded that the test is not whether the speech was made prior to the elections but whether it was made with the consent of the candidate. If it was made with the consent of the candidate then the religious leaders can very well be named for having committed corrupt practices. He further questioned whether it is mandatory for the Court to name every person who has committed a corrupt practice but is not made a party. Mr. Datar replied in the negative to this proposition.

Mr. Datar, through an example, sought to distinguish between two scenarios: first, where two corrupt practices were committed, one by the candidate independently and one by his agent; and second, where the candidate is accused of a corrupt practice based on the conduct of another. He reasoned that in the first scenario, since the candidate had committed a corrupt practice independently, his agent need not be named. In the second scenario, since the allegation of corrupt practice against the candidate was based on the conduct of another person, it was necessary to name that other person in order to prove the corrupt practice. Therefore, ‘naming’ under Section 99 in the second scenario was contended to be mandatory, non-compliance with which would vitiate the finding of corrupt practice against the candidate.

Taking his argument forward, Mr. Datar said that there cannot be a straitjacket formula while coming to the conclusion of corrupt practice. As stated in the second scenario mentioned above, it is mandatory to name and hear the third person who made the speech before holding the candidate guilty of consenting to the corrupt practice.

The Chief Justice opined that a finding of corrupt practice cannot be recorded unless the person who committed it is identified. The Chief Justice then considered the case of Mr. Abhiram Singh on its merits and observed that since all the evidence and findings are against Mr. Abhiram Singh, and he was given an opportunity to be heard and to prove his case, it is irrelevant whether the other persons were named or not. Therefore, this does not vitiate the finding or decision against him.

Post lunch, Mr. Shyam Divan appearing for one of the respondents in a connected matter commenced his arguments by narrating the brief facts of his case. Thereafter, he addressed the Court by referring to the legislative history of Section 123(3) of the RPA in order to better understand the scope and interpretation of the said section.

Mr. Divan elaborated that the issue for consideration before the bench was only limited to the interpretation of “his religion” appearing in Section 123(3). For a better understanding of Section 123(3), Mr. Divan briefly took the Court through the parliamentary debates pertaining to the section and also the various legislative amendments to the Section.

Mr. Divan will continue with his submissions when the hearing continues tomorrow.

Seven Judge Constitutional Bench defining the limits of Section 123(3) RPA: Day 1 Updates

NOTE: The title of the post was edited subsequent to the SC rejecting a plea to reexamine the meaning of Hindutva as interpreted in the 1996 Manohar Joshi judgment

Today, a seven-judge Constitutional Bench of the Supreme Court of India comprising Chief Justice T.S. Thakur and Justices Madan B. Lokur, S.A. Bobde, A.K. Goel, U.U. Lalit, D.Y. Chandrachud and L.N. Rao commenced hearing a batch of petitions to examine whether appealing for votes in the name of religion during elections amounts to “corrupt practice” under Section 123(3) of the Representation of the People Act, 1951 (‘RPA’). The Court is revisiting the 1996 judgment which held that seeking votes in the name of “Hindutva” or “Hinduism” is not a corrupt practice and therefore not in violation of the RPA.

One of the appeals which has been tagged in the present case was filed by a political leader Mr. Abhiram Singh whose election to the legislative assembly in 1990 was set aside by the Bombay High Court in 1991 for violation of this provision.

Section 123(3) of the RPA prohibits a candidate, his agent, or any other person acting with the candidate’s consent from appealing for votes, or for refraining from voting, on the grounds of his religion, race, caste, community or language. The issue before the Court was whether “his religion” in this provision refers only to the candidate’s religion, or whether it also includes the voters’ religion.

Mr. Arvind P. Datar, appearing on behalf of Mr. Abhiram Singh, commenced his arguments by stating that for the purposes of Section 123(3) a reference to religion in a candidate’s electoral speech would not, per se, deem it a corrupt practice. It would amount to a corrupt practice only if the candidate uses religion, race, caste, community or language as leverage to garner votes, either by appealing to people to vote or to refrain from voting on that basis. He further argued that “his religion” in Section 123(3) should be construed to mean only the candidate’s or the rival candidate’s religion, and should not be read to include the voters’ religion.

In this context, the Chief Justice sought to counter Mr. Datar’s submission of giving “his religion” a restrictive meaning through an example. He put forth a hypothetical situation where a candidate belonging to religion ‘A’ appeals to people belonging to religion ‘B’ to vote for him or otherwise incur “divine displeasure”. In that case, though the candidate is not referring to his own religion, he is still appealing on the basis of religion, i.e. the religion of the voters. He gave further instances to draw a distinction between appealing on the basis of the candidate’s religion and appealing on the basis of religion per se.

To emphasize his point further, the Chief Justice put forth other scenarios where religious sentiments may be invoked directly or indirectly to seek votes by the candidate or any other person on his behalf. During the course of the hearing, Justice Bobde observed that “making an appeal in the name of religion is destructive of Section 123(3). If you make an appeal in the name of religion, then you are emphasizing the difference or you are emphasizing the identity. It is wrong.” The Court was inclined to give a broad interpretation to “his religion” to include within its ambit not only the candidate or the rival candidate’s religion but also the voters’ religion.

The hearing post lunch focused on the merits of Mr. Abhiram Singh’s petition, which turned on the interpretation of Sections 98 and 99 of the RPA. Section 98 of the RPA provides for the decisions that a High Court may arrive at after the conclusion of the trial of an election petition. Section 99(1)(a)(ii) further provides that where a corrupt practice at an election is alleged, the High Court shall name all persons proved to be guilty of any corrupt practice; however, before naming any person who is not a party to the petition, the High Court must give that person an opportunity to appear before it and to cross-examine any witness who has already been examined.

In this backdrop, the following issues which were framed earlier by the three judge bench were considered by this Court:

  1. Whether the learned Judge who tried the case is required to record prima facie conclusions on proof of the corrupt practices committed by the returned candidate or his agents or collaborators (leaders of the political party under whose banner the returned candidate contested the election) or any other person on his behalf?
  2. Whether the consent of the returned candidate is required to be proved and if so, on what basis and under what circumstances the consent is held proved?
  3. On reaching the conclusion that consent is proved and prima facie corrupt practices are proved, whether the notice under Section 99(1) proviso (a) should contain, like mini judgment, extraction of pleadings of corrupt practices under Section 123, the evidence – oral and documentary and findings on each of the corrupt practices by each of the collaborators, if there are more than one, and supply them to all of them for giving an opportunity to be complied with?

The Court was of the opinion that the answer to the second issue is in the affirmative, and that it would therefore only consider the remaining two issues.

Mr. Datar argued that the election of Mr. Abhiram Singh was set aside by the Bombay High Court on the basis of the speeches made by Mr. Balasaheb Thackeray and Mr. Pramod Mahajan in which they made reference to ‘Hindutva’ to garner votes for the Shiv Sena and BJP candidates. His argument was that before coming to this conclusion, the Bombay High Court should have complied with the mandatory procedure provided in the proviso to Section 99(1)(a) which has been explained above.

The Court countered this submission by stating that the finding against Mr. Abhiram Singh stands independently, irrespective of whether the process laid down in Section 99 was followed by the Bombay High Court. The Court also observed that if the High Court names certain individuals for indulging in corrupt practice without following this provision, it is for those individuals to approach the High Court under Section 99. The Court further stated that the judgment against Mr. Abhiram Singh certainly cannot be vitiated by such non-compliance. Mr. Datar continued to stress his argument that the process under Section 99 of the RPA must be followed by the High Court before any conclusion of corrupt practice is arrived at, relying on judgments in earlier cases to buttress his submissions. Additional updates from Day 1 are available here.

The seven-judge bench will continue the hearing today. We will keep you posted regarding the further developments in this case.

Free Speech & Violent Extremism: Special Rapporteur on Terrorism Weighs in

Written by Nakul Nayak

Yesterday, the Human Rights Council came out with an advance unedited version of a report (A/HRC/31/65) of the Special Rapporteur on protection of human rights while countering terrorism. This report in particular deals with protecting human rights while preventing and countering violent extremism. The Special Rapporteur, Ben Emmerson, has made some interesting remarks on extremist speech and its position in the hierarchy of protected and unprotected speech.

First, it should be noted that the Report tries to grapple with and distinguish between the commonly conflated terms “extremism” and “terrorism”. Noting that violent extremism lacks a consistent definition across countries, and in some instances any definition at all, the Report goes on to liken it to terrorism. The Special Rapporteur also acknowledges the limited understanding of the “radicalization process” by which innocent individuals become violent extremists. While the Report does not suggest an approach to defining either term, it briefly contrasts the definitions laid down in various countries. There does, however, seem to be some consensus that the ambit of violent extremism is broader than terrorism and comprises a range of subversive activities.

The important section of the Report, from the perspective of free speech, deals with incitement to violent extremism and efforts to counter it. The Report cites UN Resolution 1624 (2005), which calls for the adoption of legislative measures as effective means of addressing incitement to terrorism. However, the Report insists on the existence of “serious human rights concerns linked to the criminalization of incitement, in particular around freedom of expression and the right to privacy”.[1] The Report then goes on to quote the UN Secretary-General and the Special Rapporteur on Free Expression laying down various safeguards for laws criminalizing incitement. In particular, these laws must prosecute only incitement that is directly related to terrorism, has the intention and effect of promoting terrorism, and must include judicial recourse, among other things.[2]

This gives us an opportunity to discuss the standards of free speech restrictions in India. While the Supreme Court expressly imported the American speech-protective standard of incitement to imminent lawless action in Arup Bhuyan, confusion still persists over the applicable standard for justifying any restriction on free speech. The Supreme Court’s outdated ‘tendency’ test, which does not require an intimate connection between speech and action, still finds a place in today’s law reports. This is evident from the celebrated case of Shreya Singhal: after a lengthy analysis of public order jurisprudence in India and advocating a direct connection between speech and public disorder, Justice Nariman muddies the water by examining Section 66A of the IT Act under the ‘tendency’ test. Some coherence in incitement standards is needed.

The next pertinent segment of the Report deals specifically with the impact of State measures on the restriction of expression, especially online content. Interestingly, the Report suggests that “Governments should counter ideas they disagree with, but should not seek to prevent non-violent ideas and opinions from being discussed”.[3] This brings to mind the recent proposal of the National Security Council Secretariat (NSCS) to set up a National Media Analytics Centre (NMAC) to counter negative online narratives through press releases, briefings, and conferences. While nothing concrete has emerged, with the proposal still in the pipeline, safeguards must be implemented to address chilling-effect and privacy concerns. It may be noted here that the Report’s remarks are limited to countering speech that forms an indispensable part of the “radicalization process”. The NMAC, however, covers negative content across the online spectrum, with its only marker being the “intensity or standing of the post”.

An important paragraph of the Report, perhaps the gist of the free speech perspective on combating violent extremism, reveals a visible unease in determining the position of extremist speech that glorifies and advocates terrorism. The Report notes the Human Rights Committee’s stand that terms such as “glorifying” terrorism must be clearly defined to avoid unnecessary incursions on free speech. At the same time, the “Secretary-General has deprecated the ‘troubling trend’ of criminalizing glorification of terrorism”, considering it to be an inappropriate restriction on expression.[4]

These propositions stand in stark contrast to India’s terror legislation, the Unlawful Activities (Prevention) Act, 1967. Section 13 punishes anyone who “advocates, … advises … the commission of any unlawful activity …”. An unlawful activity is defined in Section 2(o) to include speech acts that

  • support a claim of “secession of a part of the territory of India from the Union”, or
  • “which disclaims, questions … the sovereignty and territorial integrity of India”, or
  • rather draconically, “which causes … disaffection against India”.

It should also be noted that all three offences are content-based restrictions on free speech, i.e. limitations based purely on the subject matter of the words. Textually, these laws do not require an examination of the intent of the speaker, the impact of the words on the audience, or indeed the context in which the words are used.

Finally, the Report notes the views of the Special Rapporteur on Free Expression on hate speech, characterizing most efforts to counter it as “misguided”. However, the Report also “recognizes the importance of not letting hate speech go unchecked …”. In one sense, the Special Rapporteur expressly rejects American First Amendment jurisprudence, which does not acknowledge hate speech as a permissible ground for restricting free speech. At the same time, the Report’s insistence that “the underlying causes should also be addressed”, rather than being satisfied with mere prosecutions, is a policy aspiration that needs serious thought in India.

This Report on violent extremism (as distinct from terrorism) is much-needed and timely. The strong human rights concerns espoused, with its attendant importance attached to a context-driven approach in prosecuting speech acts, are a sobering reminder about the many inadequacies of Indian terror law and its respect for fundamental rights.

Nakul Nayak was a Fellow at the Centre for Communication Governance from 2015-16.

[1] Para 24.

[2] Para 24.

[3] Para 38.

[4] Para 39.

Anupam Kher’s Cockroach Tweet: Cultural Reference or Hate Speech?

Written by Siddharth Manohar

The noise surrounding the recent controversy over a tweet by Indian actor (and UN Ambassador for Gender Equality) Anupam Kher made it difficult to see why it caught so much attention. That it did is beyond doubt: it garnered over six thousand hits, significantly more than almost all of his other tweets, and was followed by plenty of coverage and promotion from its audience, who shared their own views in response. Here I try to look at whether there was any basis for the criticism the tweet received, and the degree to which it was justified.

To start off, it would be useful to reproduce the lines in their original form:

घरों में पेस्ट कंट्रोल होता है तो कॉक्रोच, कीड़े मकोड़े इत्यादि बाहर निकलते है घर साफ़ होता हैवैसे ही आजकल देश का पेस्ट कंट्रोल चल रहा है

Which translates into: “During pest control in houses, the cockroaches and other insects etc. are removed. The house gets cleaned. Similarly, pest control of the country is going on these days.”

On an initial reading, it is a harmless and vague insult. The term ‘cockroach’, which has attracted the most attention, seems to be employed as a characterisation of anything undesirable, be it problems, politics, or people. As a standalone insult, it is far less venomous than some of the other material one may find on the website. Apart from containing a reference to one of the actor’s films, it is also vague and targets no group explicitly. It is therefore understandable that some are bewildered by what could possibly be so harmful in this particular tweet, and are likely to pass off the criticism as the kind of overreaction that seems increasingly common.

To understand whether there is a valid criticism of the tweet, we must look at the larger context in which such a term is understood. Comparing groups of people to animals and pests has a long, concrete, and troubling history. The practice has come to be known as ‘dehumanisation’: the use of language and discourse to make a group of people seem ‘less than human’. It is a widely documented and extremely effective method of incitement to violence.

The reasoning behind its usage is also interesting and relevant. According to Helen Fein (Benesch, 2008), the purpose of this kind of discourse is to place a certain group of people outside the limits of moral consideration and obligation. The default moral understanding of most people is underpinned by the principle that it is unacceptable to carry out violent acts of hate, or to kill any person. The repeated categorisation of a group as the ‘other’, and the polarisation of their identity as a group not worthy of human respect or equal rights, works on the mind of the larger public: acts of violence and crimes start to seem more acceptable and less outrageous when committed against this group, and the process of dehumanisation escalates over time.

These narratives most often target a specific identity, most famously ethnicity and religious identity. Among the most prominent examples is inter-war Germany, where a large amount of material alienating and dehumanising Jewish people was systematically churned out by state agencies instructed with an agenda. Similarly, the build-up to the Rwandan genocide in 1994 saw a very strong narrative demonising the Tutsi ethnic group, labelling them Inyenzi (cockroaches) who could not contribute to society because of who they were, their basic identity. Such narratives create a larger feeling of resentment amongst the public against the target group, making it easier to commit acts of violence against them. Susan Benesch has argued that there cannot, in fact, be a large-scale violent attack against a group of people living amongst a majority without the cooperation or tacit acceptance of that larger group.

The comparison of people to pests and animals has repeatedly been used as a tool in this process of moulding public sentiment against certain groups. In these cases, the narrative it served to create helped in the execution of large-scale genocidal operations that have left millions of people dead over the decades. Dehumanisation has also been included in an academic study devising a ten-step model of genocide. The historical evidence overwhelmingly suggests that the use of such terms to build a narrative is part of a larger build-up towards organised violence along lines of group identity.

To suggest that an Indian actor is sending out a call for violence would be ill-thought-out, and ignorant of the complexity of the issue. What does need to be observed, however, is how easily discourse is used to create and divide identities, and what values are ascribed to those identities. While healthy and vociferous debate forms an important part of a democracy, equally important is the tangible effect that speech can have on its immediate surroundings. It is the effects and consequences (and harms) of speech that give rise to justifications for its regulation, and it is therefore always useful to keep a watchful eye on where public discourse takes us.
