[July 8-15] CCG’s Week in Review: Curated News in Information Law and Policy

The Parliament passed the Aadhaar Amendment Bill, expected to have a far-reaching impact on data sharing with private companies and State Governments; France rolled out a new “digital tax” for Big Tech; Facebook was slapped with a massive $5 billion fine by the US FTC; and uncertainty over Huawei’s inclusion in India’s 5G trials deepened — presenting this week’s most important developments in law and tech.

In focus this week: opinions and analyses of the Defence Budget for 2019-20.

Aadhaar

  • [July 8] Parliament passes Aadhaar amendment bill, The Hindu Business Line report.
  • [July 8] RS clears bill on voluntary use of Aadhaar as ID proof, Live Mint report.
  • [July 8] Techie moves Madras High Court assailing compulsory linking of Aadhaar with Universal Account Number (UAN) to avail EPFO pension, The Economic Times report.
  • [July 9] You are not bound to share Aadhaar data with schools, banks and telcos, DNA India report.
  • [July 9] ‘Ordinance on Aadhaar use doesn’t survive as House has cleared the Bill’: Centre tells SC, The Hindu report.
  • [July 10] Aadhaar Bill passage in Parliament: New clause helps secure non-NDA votes, The Economic Times report.
  • [July 11] PAN not linked to Aadhaar will become invalid from September, Business Standard report.
  • [July 11] Aadhaar amendments: New clause to allow use of Aadhaar data for state schemes, Live Mint report.
  • [July 11] Amendment: no Aadhaar for mobile wallet firms, The Economic Times report.
  • [July 11] All your Aadhaar fears are coming true in Assam, HuffPost India report.
  • [July 13] Rajya Sabha passes Aadhaar amendment Bill, allows individuals to file complaints in case of security breach, India Today report.
  • [July 14] You may soon have to pay Rs. 10,000 as fine for entering wrong Aadhaar number for transactions, News18 report.

Free Speech

  • [July 9] Twitter backs off broad limits on ‘Dehumanizing Speech’, The New York Times report.
  • [July 10] TikTok influencers charged for hate speech and attempting to incite communal violence, Business Insider report.
  • [July 13] White House Social Media Summit recap: National Public Radio report, CNN report, The New York Times report, Engadget report, The Verge report.
  • [July 13] FIRs against 10 for poems that try to ‘hinder NRC’ in Assam, Times of India report.
  • [July 15] RSS wing calls for TikTok, Helo ban, The Economic Times report.

Data Protection

  • [July 8] Indian parliament members call for Data Protection Bill and TikTok ban, Inc42 report.
  • [July 8] British Airways fined a record £183 million for data breach involving 500,000 customers: report, Medianama report, BBC report.
  • [July 9] Digital data protection to be a fundamental right in Brazil as amendment to constitution is approved, Medianama report.
  • [July 12] Not ‘Okay Google’: Firm admits that workers listen to audio from Assistant, Home, Medianama report, Fox News report, VRT News report.
  • [July 12] Google data breach faces review by Irish privacy watchdog, Bloomberg report.
  • [July 13] Facebook fined $5 billion by US regulators over privacy and data protection lapses, News18 report, The Hindu Business Line report.
  • [July 13] Indian Govt is selling vehicle owner data to companies and citizens don’t have a clue, Inc42 report, Entrackr report.
  • [July 15] Data protection law must be the same for both private and government players, The New Indian Express report.

Digital India

  • [July 15] PMO panel seeks multinational companies’ inputs on making India electronics hub, ET Telecom report.

Data Localisation and E-Commerce

  • [July 9] A tug of war hots up over the draft e-commerce policy: US tech giants want leeway in data localisation, ET Prime report. [paywall]
  • [July 11] Gautam Adani woos Amazon and Google with Indian data hubs, ET Telecom report.
  • [July 15] Delhi and Bengaluru customs stop clearing ‘gifts’, Economic Times report, Medianama report.

Telecom/5G

  • [July 15] Inter-ministerial panel clears draft RFP to select auctioneer for 2019 spectrum sale, ET Telecom report.

More on Huawei

  • [July 10] Huawei makes Monaco world’s first fully 5G country, Live Mint report.
  • [July 10] Huawei ban eased but tech can’t relax, Financial Times report.
  • [July 11] NSAB members, Chinese diplomat cross swords over Huawei, Indian Express report.
  • [July 12] Doubts over Huawei’s participation in India’s 5G rollout deepen, Live Mint report, NDTV Gadgets 360 report.
  • [July 13] US tells Britain: Fall in line over China and Huawei, or no trade deal, The Telegraph report.
  • [July 14] Huawei plans extensive layoffs at its US operations, Live Mint report, The Economic Times report.
  • [July 14] US seeks to discredit UK spies in war against Huawei, The Times UK report.

Big Tech: Regulation

  • [July 10] US announces inquiry of French digital tax that may end in tariffs, The New York Times report.
  • [July 11] France passes law taxing digital giants in defiance of US anger, Agence France Presse report.

Cryptocurrencies

  • [July 9] Indian govt to educate top cops on cryptocurrencies, aiming to investigate crypto matters, CryptoNewZ report.
  • [July 9] Facebook to Senators: Libra crypto will respect privacy, Coin Desk report.
  • [July 11] Winklevoss-backed crypto self-regulatory group prepares to woo congress, Coin Desk report.
  • [July 12] Japanese crypto exchange hacked, loses $32 million, The Hindu Business Line report, Coin Telegraph report.
  • [July 13] Study exposes how Russia, Iran and China are weaponizing crypto, CNN report.
  • [July 13] China’s illegal crypto mining crackdown could ignite a bitcoin price rally, CNN report.
  • [July 15] IRS confirms it trained staff to find crypto wallets, Coin Desk report.

Emerging Tech

  • [July 9] AI in cybersecurity expected to surpass $38 billion, Security Boulevard report.
  • [July 14] How artificial intelligence is solving different business problems, Financial Express report.
  • [July 14] Why AI is the future of cybersecurity, Forbes report.

Cybersecurity

  • [July 8] Chinese hackers demonstrate their global cyber espionage reach with breach at 10 of the world’s biggest telecoms, CPO Magazine report.
  • [July 12] Businesses in India tapping AI to improve cybersecurity, The Economic Times report, Fortune India report.
  • [July 15] Indian IT managers facing budget crunch for cybersecurity, The Economic Times report.

Tech and Law Enforcement: Surveillance and Cyber Crime

  • [July 8] NCRB invites bids to implement Automated Facial Recognition System, Medianama report.
  • [July 9] The chase gets a lot easier for tech-wielding cops now, The Economic Times report.
  • [July 9] Delhi government begins installing CCTV cameras inside classrooms to prevent crime: report, Medianama report, Times Now News report.
  • [July 10] Instagram announces two new anti-bullying features, Instagram’s announcement, The Wall Street Journal report, Medianama report.
  • [July 11] WhatsApp messages can be traced without diluting encryption, Zee News report.
  • [July 12] New POCSO bill to expand child porn definition to include anime, adults depicting children, Medianama report, Hindustan Times report.
  • [July 12] SC refuses to stay installation of CCTV cameras in Delhi Government schools, Medianama report, Bar & Bench report.

Tech and Military

  • [July 8] Japan-India security cooperation: Asian giants to expand their relations to Space, Financial Express report.
  • [July 8] Bill to tag individuals as ‘terrorist’ introduced in LS, Opposition protests: The Unlawful Activities (Prevention) Act Amendment Bill, 2019, Business Standard report.
  • [July 8] Government introduces Bill in Lok Sabha to amend National Investigation Agency Act, The Economic Times report.
  • [July 8] Govt to procure 1.86 lakh bullet proof jackets by April next year, The Hindu Business Line report.
  • [July 8] India, Russia agree on new payment mode for S-400 deal to get around US sanctions, The Print report.
  • [July 9] National e-Governance Division to revamp management app for the army, The Week report.
  • [July 9] Amazon, Microsoft wage war over the Pentagon’s ‘war cloud’, NDTV Gadgets 360 report.
  • [July 10] Last chance to get tech: Navy says negotiating next 6 subs to take years, Business Standard report.
  • [July 10] Tactical communications market size in the US region is projected to experience substantial proceeds by 2024, Tech Mag report.
  • [July 11] Govt says looking at tech to seal northern and eastern borders, Live Mint report.
  • [July 11] Army man arrested for leaking info on national security, The Tribune report.
  • [July 12] Wait for sniper rifles gets longer, MoD retracts the RFP issued last year, Financial Express report.
  • [July 12] India, Russia discuss space cooperation, The Hindu report.
  • [July 12] Israel arms company signs $100 million missile deal with Indian army, Middle East Monitor report.

Defence Budget: Reports and Analyses

  • [July 8] Budget 2019: India redirects foreign aid to Indian ocean countries, NSCS expenditure hiked, Business Standard report.
  • [July 8] Laxman K Behera, Institute for Defence Studies and Analyses, India’s Defence Budget 2019-20.
  • [July 8] PK Vasudeva, Deccan Herald, An alarming fall: Defence Budget 2019-20.
  • [July 8] Mihir S Sharma, Business Standard, Budget 2019: India won’t become a superpower with these allocations.
  • [July 9] PRS Legislative Research’s analysis: Ministry of Defence Demands for Grants 2019-20.
  • [July 9] Why Sitharaman’s budgetary allocation is unlikely to satisfy defence establishment, The Economic Times report.
  • [July 10] Brahma Chellaney, Hindustan Times, India’s defence planning has no clear strategic direction.
  • [July 10] Harsh V Pant, Live Mint Opinion, We need not whine about India’s small defence budget.
  • [July 12] Commodore Anil Jai Singh, Financial Express, Budget 2019: Optimising the Defence Budget and the need for organizational reform.
  • [July 13] Shekhar Gupta, The Print, Modi isn’t about to change India into a national security state like Pakistan and bankrupt it.
  • [July 13] Budget 2019: Cybersecurity – a holy grail for government’s Digital India dream, Financial Express analysis.
  • [July 15] Ravi Shanker Kapoor, News 18 Opinion, Cost of not carrying out economic reforms: acute shortage of funds for military modernization.

Opinions and Analyses

  • [July 8] Adam Bemma, Al Jazeera, Is Sri Lanka using the Easter attacks to limit digital freedom?
  • [July 8] Walter Olson, The CATO Institute blog, One year later, the harms of Europe’s data-privacy law.
  • [July 8] Jack Parrock, Euro News, The Brief: Data privacy v. surveillance transatlantic clash.
  • [July 9] Dr M Suresh Babu and Dr K Bhavana Raj, The Hans India, Data Protection Bill – boon or bane for digital economy?
  • [July 9] Abhijit Mukhopadhyaya and Nishant Jha, ORF, Amidst US-China standoff Huawei battles for survival.
  • [July 10] Kuldip Kumar, The Economic Times, Budget 2019 shows govt’s will to use Aadhaar to track financial transactions.
  • [July 11] Darryn Pollock, Forbes, Is Facebook forming a crypto mafia as Libra foundation members boost each other’s businesses?
  • [July 12] Amitendu Palit, Financial Express, India ditches data dialogue again.
  • [July 12] Shantanu Roy-Chaudhary, The Diplomat, India-China-Sri Lanka Triangle: The Defense Dimension.
  • [July 12] Richard A Clarke and Robert K Knake, The Wall Street Journal, US companies learn to defend themselves in cyberspace.
  • [July 12] Simon Chandler, Coin Telegraph, US Sanctions on Iran Crypto Mining— Inevitable or Impossible?
  • [July 12] Shekhar Chnadra, Scientific American, What to expect from India’s second Moon mission.
  • [July 14] Agnidipto Tarafder and Siddharth Sonkar, The Wire, Will the Aadhaar Amendment Bill Pass Judicial Scrutiny?
  • [July 14] Scott Williams, Live Wire, Your crypto overlords are coming…
  • [July 15] Why Google Cloud hasn’t picked up yet in India, ET Telecom report.

Launching our Mapping Report on ‘Hate Speech Laws in India’

We are launching our report on hate speech laws in India. This report maps criminal laws and procedural laws, along with medium-specific laws used by the state to regulate hate speech.

This report was launched last week at a panel on ‘Harmful Speech in India’, as a part of UNESCO’s World Press Freedom Day. The panel comprised Pamela Philipose, Aakar Patel, Chinmayi Arun and Sukumar Muralidharan. The panelists discussed the state of harmful speech in the country and regulatory issues arising from the proliferation of hate speech.

We hope that this report can serve as a basis for further research on hate speech in India, and as a resource for practicing lawyers, journalists and activists.

We would appreciate any feedback; please feel free to leave a comment or write to us.

The report can be found here.

Call for Applications – Civil Liberties

Update: Deadline to apply extended to January 15, 2018! 

The Centre for Communication Governance at the National Law University Delhi (CCG) invites applications for research positions in its Civil Liberties team on a full time basis.

About the Centre

The Centre for Communication Governance is the only academic research centre dedicated to working on information law and policy in India and, in a short span of four years, has become a leading centre on information policy in Asia. It seeks to embed human rights and good governance within communication policy and protect digital rights in India through rigorous academic research and capacity building.

The Centre routinely works with a range of international academic institutions and policy organizations. These include the Berkman Klein Center at Harvard University, the Programme in Comparative Media Law and Policy at the University of Oxford, the Center for Internet and Society at Stanford Law School, the Hans Bredow Institute at the University of Hamburg and the Global Network of Interdisciplinary Internet & Society Research Centers. We engage regularly with government institutions and ministries such as the Law Commission of India, the Ministry of Electronics & IT, the Ministry of External Affairs, the Ministry of Law & Justice and the International Telecommunication Union. We work actively to provide the executive and judiciary with useful research in the course of their decision making on issues relating to civil liberties and technology.

CCG has also constituted two advisory boards: a faculty board within the University, and one consisting of academic members of our international networks. These boards will oversee the functioning of the Centre and provide high-level inputs on the work undertaken by CCG from time to time.

About Our Work

The work at CCG is designed to build competence and raise the quality of discourse in research and policy around issues concerning civil liberties and the Internet, cybersecurity and global Internet governance. The research and policy output is intended to catalyze effective, research-led policy making and informed public debate around issues in technology and Internet governance.

The work of our civil liberties team covers the following broad areas:

  1. Freedom of Speech & Expression: Research in this area focuses on human rights and civil liberties in the context of the Internet and emerging communication technology in India. Research on this track squarely addresses the research gaps around the architecture of the Internet and its impact on free expression.
  2. Access, Markets and Public Interest: The research under this area considers questions of access, including how the human right to free speech could help guarantee access to the Internet. It identifies areas where competition law may need to intervene to ensure free, fair and human rights-compatible access to the Internet, and to opportunities to communicate using online services. Work in this area considers how existing competition and consumer protection law could be applied to ensure that freedom of expression in new media, and particularly on the Internet, is protected given market realities on the supply side. Under this track, we will put out material on net neutrality concerns, which are closely associated with competition, innovation, media diversity and the protection of human rights, especially the rights to free expression and to receive information, and particularly with substantive equality across media. This track will also engage with existing theories of media pluralism in this context.
  3. Privacy, Surveillance & Big Data: Research in this area focuses on surveillance as well as data protection practices, laws and policies. The work may be directed either at the normative questions that arise in the context of surveillance or data protection, or at empirical work, including data gathering and analysis, with a view to enabling policy and law makers to better understand the pragmatic concerns in developing realistic and effective privacy frameworks. This work area extends to the right to be forgotten and data localization.

Role

CCG is a young and continuously evolving organization, and members of the Centre are expected to be active participants in building a collaborative, merit-led institution and a lasting community of highly motivated young researchers.

Selected applicants will ordinarily be expected to design and produce units of publishable research with the Director(s) and senior staff members. They will also recommend and assist with designing and executing policy positions and external actions on a broad range of information policy issues.

They will also be expected to participate in other work, including writing opinion pieces, blog posts, press releases and memoranda, and to help with outreach. Selected applicants will also represent CCG in the media and at events, roundtables and conferences, and before relevant governmental and other bodies. In addition, they will have organizational responsibilities such as providing inputs for grant applications, networking, and designing and executing Centre events.

Qualifications

The Centre welcomes applications from candidates with advanced degrees in law, public policy and international relations.

  • Candidates should preferably be able to provide evidence of an interest in human rights, technology law and policy, Internet governance or national security law. In addition, they must have a demonstrable capacity for high-quality, independent work.
  • In addition to written work, a project/ programme manager within CCG will be expected to play a significant leadership role. This ranges from proactive agenda-setting to administrative and team-building responsibilities.
  • Successful candidates for the project / programme manager position should show great initiative in managing both their own and their team’s workloads. They will also be expected to lead and motivate their team through high stress periods and in responding to pressing policy questions.

However, the length of your resume is less important than the other qualities we are looking for. As a young, rapidly-expanding organization, CCG anticipates that all members of the Centre will have to manage large burdens of substantive as well as administrative work in addition to research. We are looking for highly motivated candidates with a deep commitment to building information policy that supports and enables human rights and democracy.

At CCG, we aim very high and we demand a lot of each other in the workplace. We take great pride in high-quality outputs and value individuality and perfectionism. We like to maintain the highest ethical standards in our work and workplace, and love people who manage all of this while being as kind and generous as possible to colleagues, collaborators and everyone else within our networks. A sense of humour will be most welcome. Even if you do not necessarily fit the requirements mentioned in the bulleted points above but bring the other qualities we look for, we would love to hear from you.

[The Centre reserves the right to not fill the position(s) if it does not find suitable candidates among the applicants.]

Positions

Based on experience and qualifications, successful applicants will be placed in one of the following positions. Please note that our interview panel has the discretion to determine which profile would be most suitable for each applicant.

  • Programme Officer (2-4 years’ work experience)
  • Project Manager (4-6 years’ work experience)
  • Programme Manager (6-8 years’ work experience)

A Master’s degree from a highly regarded programme might count towards work experience.

CCG staff work at the Centre’s offices on National Law University Delhi’s campus. The positions on offer are for a duration of one year, and we expect a commitment of two years.

Remuneration

The salaries will be competitive, and will usually range from ₹50,000 to ₹1,20,000 per month, depending on multiple factors including relevant experience, the position and the larger research project under which the candidate can be accommodated.

Where candidates demonstrate exceptional competence in the opinion of the interview panel, there is a possibility for greater remuneration.

Procedure for Application

Interested applicants are required to send the following information and materials by December 30, 2017 to ccgcareers@nludelhi.ac.in.

  1. Curriculum Vitae (maximum two double-spaced pages)
  2. Expression of Interest in joining CCG (maximum 500 words).
  3. Contact details for two referees (at least one academic). Referees must be informed that they might be contacted for an oral reference or a brief written reference.
  4. One academic writing sample of between 1000 and 1200 words (essay or extract, published or unpublished).

Shortlisted applicants may be called for an interview.


Understanding the ‘NetzDG’: Privatised censorship under Germany’s new hate speech law

By William James Hargreaves

The Network Enforcement Act

The Network Enforcement Act (NetzDG), a law passed on 30 June by the German Government, operates to fine social media companies up to 50 million Euros – approximately 360 crore rupees – if they persistently fail to remove hate speech from their platforms within 24 hours of the content being posted. Companies will have up to one week where the illegality of the content is debatable.

NetzDG is intended to hold social media companies financially liable for the opinions posted on their platforms. The Act will effectively subject social media platforms to the stricter content standards demanded of traditional media broadcasters.

Why was the act introduced?

Germany is one of the world’s strictest regulators of hate speech. The State’s Criminal Code covers issues of defamation, public threats of violence and incitement to illegal conduct, and provides for incarceration for Holocaust denial or inciting hatred against minorities. Germany is a country sensitive to the persuasive power of oratory in radicalizing opinion. The parameters of these sensitivities are being tested as the influx of more than one million asylum seekers and migrants has catalyzed a notably belligerent public discourse.

In response to the changing discourse, Facebook and a number of other social media platforms consented in December 2015 to the terms of a code of conduct drafted by the Merkel Government. The code of conduct was intended to ensure that platforms adhered to Germany’s domestic law when regulating user content. However, a study monitoring Facebook’s compliance found the company deleted or blocked only 39 percent of reported content, a rate that put Facebook in breach of the agreement.

NetzDG turns the voluntary agreement into a binding legal obligation, making Facebook liable for any future failure to adhere to its terms.

In a statement made following the law’s enactment, German Justice Minister Heiko Maas declared: ‘With this law, we put an end to the verbal law of the jungle on the Internet and protect the freedom of expression for all… This is not a limitation, but a prerequisite for freedom of expression’. The premise of Minister Maas’ position, and the starting point for the principles that validate the illegality of hate speech, is that verbal radicalization is often the precursor to physical violence.

As the world’s predominant social media platform, Facebook has acquired unprecedented and, in some respects, unconditioned access to people and their opinions. Given the extent of Facebook’s access, this post will focus on the possible effects of the NetzDG on Facebook and its users.

Facebook’s predicament

  • Regulatory methods

How Facebook intends to observe the NetzDG is unclear. The social media platform, whose users now constitute one-quarter of the world’s population, has previously been unwilling to disclose the details of its internal censorship processes. However, given the potential financial exposure and the sustained increase in user content, Facebook must, to some extent, increase its capacity to evaluate and regulate reported content. In response, Facebook announced in May that it would nearly double the number of employees tasked with removing content that violates its guidelines. Whether this increase in capacity will be sufficient will be determined in time.

However, and regardless of the move’s effectiveness, Facebook’s near doubling of capacity implies that human interpretation is the final authority, and that implication raises a number of questions: To what extent can manual censorship keep up with the consistent increase in content? Can the same processes maintain efficacy in a climate where hate speech is increasingly prevalent in public discourse? If automated censorship is necessary, who decides the algorithm’s parameters and how sensitive might those parameters be to the nuances of expression and interpretation? In passing the NetzDG, the German Government has relinquished the State’s authority to fully decide the answer to these questions. The jurisdiction of the State in matters of communication regulation has, to a certain extent, been privatised.

  • Censorship standards

Recently, an investigative journalism platform called ProPublica claimed possession of documents purported to be internal censorship guidelines used at Facebook. The unverified guidelines instructed employees to remove the phrase ‘migrants are filth’ but permit ‘migrants are filthy’. Whether the documents are legitimate is to some extent irrelevant: the documents provide a useful example of the specificity required where the aim is to guide one person’s interpretation of language toward a specific end – in this instance toward a correct judgment of legality or illegality.
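
To see why guidelines of this kind are brittle, consider a minimal sketch of phrase-based filtering. This is purely illustrative: the phrase list and the matching logic below are assumptions for demonstration, not Facebook’s actual tooling, which has never been made public.

```python
import re

# Purely illustrative: a hypothetical rule set built from the single
# example in the documents ProPublica reported. Not Facebook's system.
BANNED_PHRASES = ["migrants are filth"]

def should_remove_strict(post: str) -> bool:
    # Word-boundary matching removes "migrants are filth" but, per the
    # reported guideline, permits "migrants are filthy".
    return any(re.search(r"\b" + re.escape(p) + r"\b", post, re.IGNORECASE)
               for p in BANNED_PHRASES)

def should_remove_loose(post: str) -> bool:
    # Plain substring matching also catches the permitted variant,
    # i.e. it over-censors.
    return any(p in post.lower() for p in BANNED_PHRASES)

print(should_remove_strict("Migrants are filth"))   # True
print(should_remove_strict("Migrants are filthy"))  # False: permitted variant
print(should_remove_loose("Migrants are filthy"))   # True: over-removal
```

Either mechanical rule fails in one direction: the strict match misses trivial rephrasings, while the loose match censors speech the guidelines permit. This is precisely why some degree of human discretion cannot be designed out.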

Regardless of the degree of specificity, it is impossible for any formulation of guidelines to cover every possible manifestation of hate speech. Interpreting reported content will therefore necessarily require some degree of discretion. This necessity raises the question: to what extent will affording private entities discretionary powers of censorship impede freedoms of communication, particularly where the discretion afforded is conditioned by financial risk and a determination is required within a 24-hour period?

  • Facebook’s position

Statements made by Facebook prior to the legislation’s enactment expressed concern about the effect the Act would have on the already complex issue of content moderation. ‘The draft law provides an incentive to delete content that is not clearly illegal when social networks face such a disproportionate threat of fine’, a statement noted. ‘(The Act) would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies’. Facebook’s reservation is telling: the company’s reluctance to adopt the role of moderator to the extent required alludes to the potential consequences of the liability imposed by the Act.

The problem with imposing this form of liability

Any decision made by a social media platform to censor user content will be supported by the anti-discrimination principles prescribed by the NetzDG. However, where the motivation behind discretionary decision-making shifts away from social utility towards financial management, the guiding considerations become efficiency and risk minimisation. Efficiency and risk minimisation in this instance require Facebook to either (i) increase capacity, which in turn results in an increased financial burden, or (ii) adopt guidelines that minimise exposure.

Seemingly, the approach adopted by Facebook is to increase capacity. However, Facebook’s concerns that the Act creates financial incentives to adopt guidelines that minimise exposure are significant. Such concerns demonstrate an understanding that requiring profit-motivated companies to do the work of the State within a 24-hour time frame will necessarily require a different set of parameters than those imposed on the regulation of oral hate speech. If Facebook, in drafting and applying those parameters, decides to err on the side of caution and, in some instances, censor otherwise legal content, that decision will have directly infringed the freedom of communication enjoyed by German citizens.

A democracy must be able to accommodate contrasting opinions if it purports to respect rights of communication and expression. Conversely, limitations on rights enjoyed may be justified if they benefit the majority. The NetzDG is Germany’s recognition that the nature of online communication – the speed at which ideas promulgate and proliferate, and the disconnect between comment and consequence created by online anonymity – requires that the existing limitations on the freedom of communication be adapted. Whether instances of infringement are warranted in the current climate is a difficult and complicated extension of the debate between the utility of regulating hate speech and the corresponding consequences for the freedoms of communication and expression. The decision to pass the NetzDG suggests the German Government considers the risk of infringement acceptable when measured against the consequences of unfettered hate speech.

Public recognition that the NetzDG poses a risk is important. It is best practice within a democracy that any new limit to liberty, oral or otherwise, be questioned and a justification given. Here the justification seems well-founded. However, the answers to the questions posed by sceptics may prove telling as Germany positions itself at the forefront of the debate over online censorship.

(William is a student at the University of Melbourne and is currently interning at CCG)

Reviewing the Law Commission’s latest hate speech recommendations

By Arpita Biswas

Introduction

The Law Commission has recently released a report on hate speech laws in India. The Supreme Court, in Pravasi Bhalai vs. Union of India, asked the Law Commission to recommend changes to existing hate speech laws and to “define the term hate speech”. The report discusses the history of hate speech jurisprudence in India and in certain other jurisdictions. In addition, it stresses the difficulty of defining hate speech and the lack of a concise definition. In the absence of such a definition, certain ‘identifying criteria’ have been mentioned to detect instances of hate speech. It also discusses the theories of Jeremy Waldron (the ‘dignity’ principle) and makes a case for protecting the interests of minority communities by regulating speech. In this regard, two new sections for the IPC have been proposed. They are as follows:

(i) Prohibiting incitement to hatred-

“153 C. Whoever on grounds of religion, race, caste or community, sex, gender identity, sexual orientation, place of birth, residence, language, disability or tribe –

(a)  uses gravely threatening words either spoken or written, signs, visible representations within the hearing or sight of a person with the intention to cause, fear or alarm; or

(b)  advocates hatred by words either spoken or written, signs, visible representations, that causes incitement to violence shall be punishable with imprisonment of either description for a term which may extend to two years, and fine up to Rs 5000, or with both.”.

(ii) Causing fear, alarm, or provocation of violence in certain cases.

“505 A. Whoever in public intentionally on grounds of religion, race, caste or community, sex, gender, sexual orientation, place of birth, residence, language, disability or tribe-

uses words, or displays any writing, sign, or other visible representation which is gravely threatening, or derogatory;

(i) within the hearing or sight of a person, causing fear or alarm, or;

(ii) with the intent to provoke the use of unlawful violence,

against that person or another, shall be punished with imprisonment for a term which may extend to one year and/or fine up to Rs 5000, or both”.

The author is of the opinion that these recommended amendments are vague and broadly worded and could lead to a chilling effect and over-censorship. Here are a few reasons why the recommendations might not be compatible with free speech jurisprudence:

  1. Three-part test

Article 10 of the European Convention on Human Rights lays down three requirements that need to be fulfilled to ensure that a restriction on free speech is warranted. The Law Commission report also discusses this test; it includes the necessity of a measure being ‘prescribed by law’, the need for a ‘legitimate aim’ and the test of ‘necessity and proportionality’.

Under the ‘prescribed by law’ standard, it is necessary for a restriction on free speech to be ‘clear and not ambiguous’. For instance, a phrase like ‘fear or alarm’ (existing in Section 153A and Section 505) has been criticized for being ‘vague’. Without defining or restricting this term, the public would not be aware of what constitutes ‘fear or alarm’ and would not know how to comply with the law. This standard was also reiterated in Shreya Singhal vs. Union of India, where it was held that the ambiguously worded Section 66A could be problematic for innocent people, since they would not be aware of “which side of the line they fall” on.

  2. Expanding scope to online offences?

The newly proposed sections also mention that any ‘gravely threatening words within the hearing or sight of a person’ would be penalized. Presumably, the phrase ‘within the sight or hearing of a person’ broadens the scope of this provision and could allow online speech to come under the ambit of the IPC. This phrase is similar to the wording of Section 5(1) of the Public Order Act, 1986[1] in the United Kingdom, which penalizes “harassment, alarm or distress”. Even though the section does not explicitly mention that it would cover offences on the internet, it has been presumed to do so.[2]

Similarly, if the intent of the framers of Section 153C is to expand the scope to cover online offences, it might introduce the same issues as the struck-down Section 66A of the IT Act did. Section 66A intended to penalize the transmission of information which was ‘menacing’ and which promoted ‘hatred or ill will’. The over-breadth of the terms in the section led to it being struck down; another reason for striking it down was the lowering of the ‘incitement’ threshold (discussed below). Even though the proposed Section 153C does not provide for as many grounds (hatred, ill will, annoyance, etc.), it does explicitly lower the threshold from ‘incitement’ to ‘fear or alarm’/‘discrimination’.

  3. The standard of ‘hate speech’

The report also advocates penalizing the ‘fear or alarm’ caused by such speech, since it could potentially have the effect of ‘marginalizing a section of the society’. As mentioned above, it has been explicitly stated that the threshold of ‘incitement to violence’ should be lowered and that factors like ‘incitement to discrimination’ should also be considered.

The Shreya Singhal judgment drew a distinction between ‘discussion, advocacy and incitement’, stating that a restriction on speech protected under Article 19(1)(a) of the Constitution would be justifiable only where the speech amounts to ‘incitement’ and not merely ‘discussion’ or ‘advocacy’. This distinction was drawn so that discussing or advocating ideas which could lead to problems with ‘public order’ or disturb the ‘security of the state’ could be differentiated from ‘incitement’, which establishes more of a ‘causal connection’.

Similarly, if the words used contribute to causing ‘fear or alarm’, the threshold of ‘incitement’ would be lowered, and constitutionally protected speech could be censored.

Conclusion

Despite the shortcomings mentioned above, the report is positive in a few ways. It draws attention to important contemporary issues affecting minority communities and how speech is often used to mobilize communities against each other. It also relies on Jeremy Waldron’s ‘dignity principle’ to make a case for imposing differing hate speech standards to protect minority communities. In addition, the grounds for discrimination now include ‘tribe’ and ‘sexual orientation’ amongst others.

However, existing case law, coupled with recent instances of censorship, could make the insertion of these provisions troubling. India’s relationship with free speech is already dire; the Press Freedom Index ranks the country at 133 (out of 180) and the Freedom on the Net report states that India is only ‘partly free’ in this regard. The Law Commission might need to reconsider the recommendations for the sake of upholding free speech. Pravasi Bhalai called for sanctioning politicians’ speeches, but the recommendations made by the Law Commission might be far-reaching and their effects could be chilling.

[1] Section 5- Harassment, alarm or distress.
(1)A person is guilty of an offence if he—
(a)uses threatening or abusive words or behaviour, or disorderly behaviour, or
(b)displays any writing, sign or other visible representation which is threatening or abusive,
within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby.

[2] David Wall, Cybercrime: The Transformation of Crime in the Information Age, Page 123, Polity.

Arpita Biswas is a Programme Officer at the Centre for Communication Governance at National Law University Delhi

Online Extremism and Hate Speech – A Review of Alternate Regulatory Methods

By Arpita Biswas

Introduction

Online extremism and hate speech on the internet are growing global concerns. In 2016, the EU signed a code of conduct with social media companies including Facebook, Google and Twitter to effectively regulate hate speech on the internet. The code, amongst other measures, discussed stricter sanctions on intermediaries (social media companies) in the form of a ‘notice and takedown’ regime, a practice which has been criticised for effectively creating a ‘chilling’ effect and leading to over-censorship.

While this system is still in place, social media companies are attempting to adopt alternative regulatory methods. If companies could ensure that they routinely track their websites for illegal content before government notices are issued, this could save them time and money. This post will attempt to offer some insight into alternative modes of regulation used by social media companies.

YouTube Heroes – Content Regulation by Users

YouTube Heroes was launched in September 2016 with the aim of efficiently regulating content. Under this initiative, YouTube users are allowed to ‘mass-flag’ content that goes against the Community Guidelines. The Community Guidelines specifically prohibit instances of hate speech: as per the Guidelines, content that “promotes violence or hatred against individuals” based on certain attributes amounts to hate speech. These ‘attributes’ include but are not limited to race, gender and religion.

‘Mass-flagging’ is just one of the many tools available to a YouTube Hero. The system is based on points and ranks, with users generating points for helping translate videos and for flagging inappropriate content. As they climb up the ranking system, users become privy to exclusive deals, like the ability to directly contact YouTube staff. ‘Mass-flagging’ is in essence the same as flagging a video, an option that YouTube already offered. However, the incentive of gaining access to private moderator forums and YouTube staff could lead to users flagging videos for extraneous reasons. While ‘mass-flagged’ videos are reviewed by YouTube moderators before being taken down, the initiative has still raised concerns.
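
For concreteness, the points-and-ranks mechanic can be sketched as below. Every point value and rank threshold here is invented for illustration; treat the numbers as assumptions rather than YouTube’s published scheme.

```python
# Illustrative sketch of a gamified moderation ladder in the style of
# YouTube Heroes. All point values and thresholds are hypothetical.
POINTS = {"flag_video": 1, "add_captions": 2}

# (minimum points, rank) pairs in ascending order.
RANKS = [(0, "Hero in training"), (10, "Hero"), (100, "Super Hero")]

def rank_for(points: int) -> str:
    """Return the highest rank whose threshold has been reached."""
    current = RANKS[0][1]
    for threshold, name in RANKS:
        if points >= threshold:
            current = name
    return current

# Points accrue for the act of flagging, not for flagging accurately --
# the incentive problem that critics of the scheme point to.
print(rank_for(10 * POINTS["flag_video"]))  # "Hero"
```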

It has been criticised for giving free rein to users, who may flag content because of personal biases, leading to ‘harassment campaigns’. Popular YouTube users have panned YouTube Heroes, apprehending the possibility of their videos being targeted by ‘mobs’. Despite the review system in place, users have also expressed doubts about YouTube’s ability to accurately take down flagged content. Since the initiative is in its testing stage, it is difficult to determine what its outcome could be.

Facebook’s Online Civil Courage Initiative – Counter Speech

Governmental authorities across the world have been attempting to curb hate speech and online extremism in myriad ways. For instance, in November 2015, an investigation involving one of Facebook’s European Managing Directors was launched. The Managing Director was accused of letting Facebook host hate speech. As the investigation drew to an end, Facebook representatives were not implicated. However, this investigation marked an increase in international pressure to deal effectively with hate speech.

Due to growing pressure from governmental authorities, Facebook began to ‘outsource’ content removal. In January 2016, a German company called ‘Arvato’ was delegated the task of reviewing and taking down reported content, along with Facebook’s Community Operations Team. There is limited public information on the terms of service or rules Arvato is bound by. In the absence of any such information, ‘outsourcing’ could contribute to a private censorship regime. With no public guidelines in place, the outsourcing process is neither transparent nor accountable.

Additionally, Facebook has been working with other private bodies to regulate content online. Early in 2016, Facebook, in partnership with several NGOs, launched the Online Civil Courage Initiative (OCCI) to combat online extremism with counter-speech. Facebook COO Sheryl Sandberg said that ‘censorship’ would not put an end to hate speech and that counter-speech would be a far more effective mode of regulation. Under this initiative, civil society organisations and NGOs are ‘rewarded’ with ad credits, marketing resources and strategic support for countering speech online.

It is pertinent to note that the Information Pack on Counter Speech Engagement is the only set of guidelines made public by the OCCI. These guidelines provide information on planning a counter-speech campaign. An interesting aspect of the information pack is the section on ‘Responding and Engaging during a campaign’. Under this section, comments are categorised as ‘supportive, negative, constructive, antagonistic’. A table suggests how different categories of comments should be ‘engaged with’. Surprisingly, ‘antagonistic’ comments are to be ‘ignored, hidden or deleted’. The information pack does not attempt to define any of the above categories. These vaguely worded guidelines could lead to confusion amongst NGOs. While studies have shown that counter-speech might be the most effective way to deal with online extremism, the OCCI would have to make major changes to reach the goals of the counter-speech movement.

In October 2016, Facebook reportedly came under scrutiny again. A German Federal Minister stated that Facebook was still not effectively dealing with hate speech targeted at refugees, and another investigation might be in the pipeline.

Conclusion

It remains to be seen whether the alternative regulatory methods adopted by social media companies will effectively deal with hate speech and online extremism.

It is important to note that social media companies are ‘outsourcing’ internal regulation to private bodies or users (YouTube Heroes, Arvato and the OCCI). These private bodies might amplify the problems faced by the intermediary liability system, which could lead to ‘over-censorship’. That system has been criticised for its ‘notice and takedown’ regime: non-compliance with takedown orders attracts strict sanctions, and fear of these sanctions could lead intermediaries to take down content which falls in grey areas but is not illegal.

However, under the internal regulation method, social media companies will continue to function under the fear of state pressure. Private bodies like Arvato and NGOs affiliated with the OCCI will also regulate content, with the incentive of receiving ‘advertisement credit’ and ‘points’. This could lead to over-reporting for the sake of incentives. Coupled with pressure from the state, this might lead to a ‘chilling’ effect.

In addition, some of these private bodies do not operate in a transparent manner. For instance, providing public information on Arvato’s content regulation activities and the guidelines it is bound by would help create a far more accountable system. Further, the OCCI needs clearer, well-defined policies to fulfill the objectives of disseminating counter-speech.

Arpita Biswas is a Programme Officer at the Centre for Communication Governance at National Law University Delhi


Seven Judge Constitutional Bench defining the limits of Section 123(3) RPA: Day 2 Updates

By Kasturika Kaumudi

NOTE: The title of the post was edited subsequent to the SC rejecting a plea to reexamine the meaning of Hindutva as interpreted in the 1996 Manohar Joshi judgment

Mr. Arvind P. Datar continued his arguments on Day 2. He commenced by referring to his arguments from the previous day on the interplay of Sections 98 and 99 of the Representation of the People Act, 1951 (‘RPA’) and reiterated the issues framed by the three judge bench mentioned here.

He submitted that there is no conflict with the stand taken by the Supreme Court in the Manohar Joshi case. He read out several relevant portions of the judgment which talk about the mandatory nature of Section 99, especially where a returned candidate has been accused of corrupt practice vicariously for the conduct of any other person with his consent. He stated that the question regarding the returned candidate being guilty of corrupt practice can be decided only at the end of the trial, after an enquiry against the other person is concluded by issuing them notices under Section 99, and accordingly, the trial under Sections 98 and 99 has to be a composite trial. According to Mr. Datar, it will lead to an absurd situation if the trial against the returned candidate is concluded first and the proceedings under Section 99 are commenced afterwards for the purpose of deciding whether any other person is also required to be named as being guilty of the corrupt practice. After extensive arguments on this issue, Justice Goel was of the opinion that the trial under Sections 98 and 99 must be one composite trial, which may take place in two steps but not in two separate phases.

The Court then posed a question to Mr. Datar regarding the stage at which notice can be issued to a third party and the nature of such notice under Sections 98 and 99, since none of the previous cases have examined or answered this issue. Mr. Datar reiterated his submission that Sections 98 and 99 have to be interpreted to mean that notice to a third party can be issued only during trial and not at the conclusion of the trial. Furthermore, the Chief Justice opined that a notice cannot be issued mechanically by the High Court. Before issuing such notice, the High Court has to be prima facie satisfied as to the role of the collaborators in the commission of the corrupt practice.

In regard to the nature of notice under Section 99, Mr. Datar referred to the third issue framed by the three judge bench i.e.,

“On reaching the conclusion that consent is proved and prima facie corrupt practices are proved, whether the notice under Section 99(1) proviso (a) should contain, like mini judgment, extraction of pleadings of corrupt practices under Section 123, the evidence – oral and documentary and findings on each of the corrupt practices by each of the collaborators, if there are more than one, and supply them to all of them for giving an opportunity to be complied with?”

Mr. Datar contended that the notice to a third party or collaborator should contain the specific charges and the specific portions of the speech allegedly amounting to corrupt practice. With reference to the Manohar Joshi case, he contended that the notice does not have to be in the form of a mini judgment. At this juncture, the Chief Justice expressed reservations about the use of the phrase “mini judgment” and opined that it is not appropriate in this context.

The Court also observed that the judicial principles that govern the analogous provision contained in Section 319 of the Criminal Procedure Code should also apply to Section 99 of the RPA. The Court further observed that since it is a quasi-criminal charge under the RPA, apart from the evaluation of evidence, the third person or collaborator to whom notice is being issued has to be informed of the reasons for such issuance of notice.

Thereafter, the Court considered the issue of ‘naming’ of a third person or a collaborator under Section 99. The issues under consideration were firstly, when can you ‘name’ a third party or collaborator and secondly, whether ‘naming’ is mandatory under Section 99. Mr. Datar contended that on a conjoint reading of Sections 98, 99 and 123(3), it is clear that there are only three categories of persons who can be named i.e. the candidate, his agent or any other person who has indulged in corrupt practices with the consent of the candidate.

While dealing with this subject, the Chief Justice posed a very pertinent question as to whether a person can be ‘named’ for corrupt practices under Section 99 for a speech made prior to the elections. To illustrate the point, he gave an instance where elections may be scheduled four years away, but a person preparing to contest them requests some religious leaders to make speeches on his behalf. The candidate may then use the video recording of the speeches at the time of elections. In such a situation, can the religious leaders be ‘named’ under Section 99 for having committed a corrupt practice, since the speeches were made prior to the notification of elections?

After testing various such propositions, the Chief Justice concluded that the test is not whether the speech was made prior to the elections but whether it was made with the consent of the candidate. If it was made with the consent of the candidate then the religious leaders can very well be named for having committed corrupt practices. He further questioned whether it is mandatory for the Court to name every person who has committed a corrupt practice but is not made a party. Mr. Datar replied in the negative to this proposition.

Mr. Datar, through an example, sought to distinguish between two scenarios: first, where two corrupt practices were committed, one by the candidate independently and one by his agent; and second, where the candidate is accused of a corrupt practice based on the conduct of another. He reasoned that in the first scenario, since the candidate had committed a corrupt practice independently, his agent need not be named. In the second scenario, since the allegation of corrupt practice against the candidate was based on the conduct of another person, it was necessary to name that other person in order to prove the corrupt practice. ‘Naming’ under Section 99 in the second scenario was therefore contended to be mandatory, non-compliance with which would vitiate the finding of corrupt practice against the candidate.

Taking his argument forward, Mr. Datar said that there cannot be a straitjacket formula while coming to the conclusion of corrupt practice. As stated in the second scenario mentioned above, it is mandatory to name and hear the third person who made the speech before holding the candidate guilty of consenting to the corrupt practice.

The Chief Justice opined that there cannot be recording of finding of corrupt practice unless the person who has committed such corrupt practice is identified. The Chief Justice then considered the case of Mr. Abhiram Singh on its merits and observed that since all the evidence and findings are against Mr. Abhiram Singh and he was given an opportunity of being heard and to prove his case, then it is irrelevant whether the other persons were named or not. Therefore, this does not vitiate the finding or decision against him.

Post lunch, Mr. Shyam Divan, appearing for one of the respondents in a connected matter, commenced his arguments by narrating the brief facts of his case. Thereafter, he addressed the Court by referring to the legislative history of Section 123(3) of the RPA in order to better understand the scope and interpretation of the said section.

Mr. Divan elaborated that the issue for consideration before the bench was only limited to the interpretation of “his religion” appearing in Section 123(3). For a better understanding of Section 123(3), Mr. Divan briefly took the Court through the parliamentary debates pertaining to the section and also the various legislative amendments to the Section.

Mr. Divan will continue with his submissions when the hearing continues tomorrow.

Seven Judge Constitutional Bench defining the limits of Section 123(3) RPA: Day 1 Updates

By Kasturika Kaumudi

NOTE: The title of the post was edited subsequent to the SC rejecting a plea to reexamine the meaning of Hindutva as interpreted in the 1996 Manohar Joshi judgment

Today, a seven-judge Constitutional Bench of the Supreme Court of India, comprising Chief Justice T.S. Thakur and Justices Madan B. Lokur, S.A. Bobde, A.K. Goel, U.U. Lalit, D.Y. Chandrachud and L.N. Rao, commenced hearing a batch of petitions to examine whether appeals in the name of religion for votes during elections amount to “corrupt practice” under Section 123(3) of the Representation of the People Act, 1951 (‘RPA’). The Court is revisiting the 1996 judgment where it was held that seeking votes in the name of “Hindutva” or “Hinduism” is not a corrupt practice and therefore not in violation of the RPA.

One of the appeals which has been tagged in the present case was filed by a political leader Mr. Abhiram Singh whose election to the legislative assembly in 1990 was set aside by the Bombay High Court in 1991 for violation of this provision.

Section 123(3) of the RPA prohibits a candidate, his agent, or any other person acting with the candidate’s consent from appealing for votes, or asking voters to refrain from voting, on the grounds of his religion, race, caste, community or language. The issue before the Court was whether ‘his religion’ mentioned in this provision refers only to the candidate’s religion, or whether an appeal to the voters’ religion would also be considered a corrupt practice.

Mr. Arvind P. Datar, appearing on behalf of Mr. Abhiram Singh, commenced his arguments by stating that for the purposes of Section 123(3), a reference to religion in a candidate’s electoral speech per se would not deem it a corrupt practice. It would amount to a corrupt practice only if such a candidate uses religion, race, caste, community or language as leverage to garner votes, either by appealing to people to vote or to refrain from voting on such basis. He further argued that “his religion” mentioned in Section 123(3) should be construed to mean only the candidate’s or the ‘rival’ candidate’s religion. It should not be read to include the voters’ religion.

In this context, the Chief Justice, through an example, tried to counter Mr. Datar’s submission of giving “his religion” a restrictive meaning. He put forth a hypothetical situation where a candidate belonging to religion ‘A’ appeals to people belonging to religion ‘B’ to vote for him or otherwise incur “divine displeasure”. In this instance, though the candidate is not referring to his own religion, he is still appealing on the basis of religion, i.e. the religion of the voters. He further gave instances to draw a distinction between appealing on the basis of the candidate’s religion and religion per se.

To emphasize his point further, the Chief Justice put forth other scenarios where religious sentiments may be invoked directly or indirectly to seek votes by the candidate or any other person on his behalf. During the course of the hearing, Justice Bobde observed that “making an appeal in the name of religion is destructive of Section 123(3). If you make an appeal in the name of religion, then you are emphasizing the difference or you are emphasizing the identity. It is wrong.” The Court was inclined to give a broad interpretation to “his religion”, to include within its ambit not only the candidate’s or the rival candidate’s religion but also the voters’ religion.

The hearing post lunch focused on the merits of Mr. Abhiram Singh’s petition, which turned on the interpretation of Sections 98 and 99 of the RPA. Section 98 provides for the decisions that a High Court may arrive at after the conclusion of the trial of an election petition. Section 99(1)(a)(ii) further provides that, where a corrupt practice is alleged at an election, the High Court shall name all persons who have been proved guilty of any corrupt practice; however, before naming any person who is not a party to the petition, the High Court must give that person an opportunity to appear before it and to cross-examine any witness who has already been examined.

Against this backdrop, the Court considered the following issues, which had been framed earlier by a three-judge bench:

  1. Whether the learned Judge who tried the case is required to record prima facie conclusions on proof of the corrupt practices committed by the returned candidate or his agents or collaborators (leaders of the political party under whose banner the returned candidate contested the election) or any other person on his behalf?
  2. Whether the consent of the returned candidate is required to be proved and if so, on what basis and under what circumstances the consent is held proved?
  3. On reaching the conclusion that consent is proved and prima facie corrupt practices are proved, whether the notice under Section 99(1) proviso (a) should contain, like mini judgment, extraction of pleadings of corrupt practices under Section 123, the evidence – oral and documentary and findings on each of the corrupt practices by each of the collaborators, if there are more than one, and supply them to all of them for giving an opportunity to be complied with?

The Court was of the opinion that the answer to the second issue is in the affirmative, and that it would therefore consider only the remaining two issues.

Mr. Datar argued that the election of Mr. Abhiram Singh was set aside by the Bombay High Court on the basis of speeches made by Mr. Balasaheb Thackeray and Mr. Pramod Mahajan, in which they referred to ‘Hindutva’ to garner votes for the Shiv Sena and BJP candidates. His argument was that, before coming to this conclusion, the Bombay High Court should have complied with the mandatory procedure in the proviso to Section 99(1)(a), explained above.

The Court countered this submission by stating that the finding against Mr. Abhiram Singh stands independently, irrespective of whether the Bombay High Court followed the process laid down in Section 99. The Court also observed that if the High Court names individuals for indulging in corrupt practice without following this provision, it is for those individuals to approach the High Court under Section 99. The judgment against Mr. Abhiram Singh, the Court stated, certainly cannot be vitiated by such non-compliance. Mr. Datar continued to stress his argument that the High Court must follow the process under Section 99 of the RPA before arriving at any conclusion of corrupt practice, relying on judgments in earlier cases to buttress his submissions. Additional updates from Day 1 are available here.

The seven-judge bench will continue the hearing today. We will keep you posted on further developments in this case.

Kasturika Kaumudi is a Programme Officer with the Centre for Communication Governance at National Law University Delhi.

Free Speech & Violent Extremism: Special Rapporteur on Terrorism Weighs in

Written by Nakul Nayak

Yesterday, the Human Rights Council released an advance unedited version of a report (A/HRC/31/65) of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism. This report deals in particular with protecting human rights while preventing and countering violent extremism. The Special Rapporteur, Ben Emmerson, has made some interesting remarks on extremist speech and its position in the hierarchy of protected and unprotected speech.

First, it should be noted that the Report tries to grapple with, and distinguish between, the commonly conflated terms “extremism” and “terrorism”. Noting that violent extremism lacks a consistent definition across countries, and in some instances any definition at all, the Report goes on to liken it to terrorism. The Special Rapporteur also acknowledges the limited understanding of the “radicalization process” whereby innocent individuals become violent extremists. While the Report does not suggest an approach to defining either term, it briefly contrasts the definitions laid down in various countries. There does, however, seem to be some consensus that the ambit of violent extremism is broader than terrorism and consists of a range of subversive activities.

The important section of the Report, from a free speech perspective, deals with incitement to violent extremism and efforts to counter it. The Report cites UN Security Council Resolution 1624 (2005), which calls for legislative measures as effective means of addressing incitement to terrorism. However, the Report insists that there are “serious human rights concerns linked to the criminalization of incitement, in particular around freedom of expression and the right to privacy”.[1] The Report then quotes the UN Secretary-General and the Special Rapporteur on Free Expression laying down various safeguards for laws criminalizing incitement. In particular, such laws must prosecute only incitement that is directly related to terrorism and has the intention and effect of promoting terrorism, and must provide for judicial recourse, among other things.[2]

This gives us an opportunity to discuss the standards governing free speech restrictions in India. While the Supreme Court expressly imported the speech-protective American standard of incitement to imminent lawless action in Arup Bhuyan, confusion persists over the standard applicable in justifying any restriction on free speech. The Supreme Court’s outdated ‘tendency’ test, which does not require an intimate connection between speech and action, still finds a place in today’s law reports. This is evident from the celebrated case of Shreya Singhal: after a lengthy analysis of public order jurisprudence in India, and after advocating a direct connection between speech and public disorder, Justice Nariman muddies the waters by examining Section 66A of the IT Act under the ‘tendency’ test. Some coherence in incitement standards is needed.

The next pertinent segment of the Report deals specifically with the impact of State measures restricting expression, especially online content. Interestingly, the Report suggests that “Governments should counter ideas they disagree with, but should not seek to prevent non-violent ideas and opinions from being discussed”.[3] This brings to mind the recent proposal of the National Security Council Secretariat (NSCS) to set up a National Media Analytics Centre (NMAC) to counter negative online narratives through press releases, briefings, and conferences. While nothing concrete has emerged, with the proposal still in the pipeline, safeguards must be implemented to address chilling-effect and privacy concerns. It may be noted here that the Report’s remarks are limited to countering speech that forms an indispensable part of the “radicalization process”. The NMAC, however, covers negative content across the online spectrum, its only marker being the “intensity or standing of the post”.

An important paragraph of the Report, perhaps the gist of its free speech perspective on combating violent extremism, reveals a visible unease in determining the position of extremist speech that glorifies or advocates terrorism. The Report notes the Human Rights Committee’s stand that terms such as “glorifying” terrorism must be clearly defined to avoid unnecessary incursions on free speech. At the same time, the Secretary-General has deprecated the ‘troubling trend’ of criminalizing the glorification of terrorism, considering it an inappropriate restriction on expression.[4]

These propositions stand in stark contrast to India’s terror legislation, the Unlawful Activities (Prevention) Act, 1967. Section 13 punishes anyone who “advocates, … advises … the commission of any unlawful activity …”. An unlawful activity is defined in Section 2(o) to include speech acts which:

  • support a claim of “secession of a part of the territory of India from the Union”,
  • “disclaim, question … the sovereignty and territorial integrity of India”, or,
  • rather draconically, “cause … disaffection against India”.

It will also be noted that all three offences are content-based restrictions on free speech, i.e. limitations based purely on the subject matter of the words. Textually, these provisions do not necessarily require an examination of the speaker’s intent, the impact of the words on the audience, or indeed the context in which the words are used.

Finally, the Report notes the views of the Special Rapporteur on Free Expression on hate speech, which characterize most efforts to counter it as “misguided”. However, the Report also “recognizes the importance of not letting hate speech go unchecked …” In one sense, the Special Rapporteur expressly rejects American First Amendment jurisprudence, which does not recognize hate speech as a permissible ground for restricting free speech. At the same time, the Report’s insistence that “the underlying causes should also be addressed”, rather than being satisfied with mere prosecutions, is a policy aspiration that deserves serious thought in India.

This Report on violent extremism (as distinct from terrorism) is much needed and timely. The strong human rights concerns it espouses, with their attendant emphasis on a context-driven approach to prosecuting speech acts, are a sobering reminder of the many inadequacies of Indian terror law and its respect for fundamental rights.

Nakul Nayak was a Fellow at the Centre for Communication Governance from 2015-16.

[1] Para 24.

[2] Para 24.

[3] Para 38.

[4] Para 39.

Anupam Kher’s Cockroach Tweet: Cultural Reference or Hate Speech?

Written by Siddharth Manohar

The noise surrounding the recent controversy over a tweet by Indian actor (and UN Ambassador for Gender Equality) Anupam Kher made it difficult to examine why it caught so much attention. That it did is beyond doubt: the tweet garnered over six thousand hits, significantly more than almost all of his other tweets, and was followed by plenty of coverage and promotion from its audience, who shared their own views in response. Here I look at whether there was any basis for the criticism the tweet received, and the degree to which it was justified.

To start off, it would be useful to reproduce the lines in their original form:

घरों में पेस्ट कंट्रोल होता है तो कॉक्रोच, कीड़े मकोड़े इत्यादि बाहर निकलते है। घर साफ़ होता है। वैसे ही आजकल देश का पेस्ट कंट्रोल चल रहा है।

Which translates to: “When pest control is done in houses, the cockroaches, insects and so on come out. The house gets cleaned. Similarly, the country’s pest control is going on these days.”

On an initial reading, it is a harmless if vague insult. The term ‘cockroach’, which has attracted the most attention, seems to be employed as a characterisation of anything undesirable, be it problems, politics, or people. As a standalone insult, it is far less venomous than much of the other material one may find on the platform. Apart from containing a reference to one of the actor’s films, it is also vague and targets no group explicitly. It is therefore understandable that some are bewildered by what could possibly be so harmful in this particular tweet, and are likely to pass off the criticism as the sort of overreaction that seems increasingly common.

To understand whether there is a valid criticism of the tweet, we must look at the larger context in which such a term is understood. Comparing groups of people to animals and pests has a long, concrete, and troubling history. Over time and through study, this process has acquired the name ‘dehumanisation’: the use of language and discourse to make a group of people seem ‘less than human’. It is a widely documented and extremely effective method of incitement to violence.

The reasoning behind its usage is also interesting and relevant. According to Helen Fein (Benesch, 2008), the purpose of this kind of discourse is to place a certain group of people outside the limits of moral consideration and obligation. The default moral understanding of most people is underpinned by the principle that it is unacceptable to carry out violent acts of hate, or to kill any person. The repeated categorisation of a group of people as the ‘other’, and the polarisation of their identity as a group not worthy of human respect or equal rights, works on the mind of the larger public: acts of violence and crimes start to seem more acceptable and less outrageous when committed against this group, and the dehumanisation escalates over time.

These narratives most often target a specific identity, most famously ethnicity and religious identity. The most prominent examples occurred in inter-war Germany, where a large amount of material alienating and dehumanising Jewish people was systematically churned out by state agencies acting on an agenda. Similarly, the build-up to the Rwandan genocide in 1994 saw a very strong narrative demonising the Tutsi ethnic group, labelling them Inyenzi (cockroaches) who could not contribute to society because of who they were, their basic identity. Such a narrative creates a larger feeling of resentment among the public against the target group, making it easier to commit acts of violence against them. Susan Benesch has argued that there cannot, in fact, be a large-scale violent attack against a group of people living amongst a majority without the cooperation or tacit acceptance of that larger group.

The comparison of people to pests and animals has repeatedly been used as a tool in this process of moulding public sentiment against certain groups. In the cases above, the narrative it helped create served in the execution of large-scale genocidal operations that have killed millions of people over the decades. Dehumanisation has also been included as a stage in an academic ten-step model of genocide. The historical evidence overwhelmingly suggests that the use of such terms to build a narrative is part of a larger build-up towards organised violence along lines of group identity.

To suggest that an Indian actor is sending out a call for violence would be ill thought out, and ignorant of the complexity of the issue. What does need to be observed, however, is how easily public discussion can be used to create and divide identities, and what values are ascribed to those identities. While healthy and vociferous debate forms an important part of a democracy, equally important is the tangible effect that speech can have on its immediate surroundings. It is the effects and consequences (and harm) of speech that give rise to justifications for its regulation, and it is therefore always useful to keep a watchful eye on where public discourse takes us.
