[September 16-23] CCG’s Week in Review: Curated News in Information Law and Policy

Cybersecurity experts warned of a new ‘SIM jacking’ threat, the Kerala High Court recognised a right to access the internet as the shutdown in Kashmir entered its 50th day, and there were further updates on the linkage of Aadhaar with voter IDs and social media as the Indian Army braces itself to adopt AI – presenting this week’s most important developments in law, tech and national security.

Aadhaar

  • [Sep 16] Here are the amendments the Election Commission wants to the Representation of the People Act for Aadhaar-Voter ID linkage, Medianama report.
  • [Sep 18] Why Maj. Gen. Vombatkere has challenged Aadhaar Amendment Act in the Supreme Court; On WhatsApp and traceability, Medianama report.
  • [Sep 19] Drop in Aadhaar enrolments in J&K, The Economic Times report.
  • [Sep 20] In-principle decision to link Aadhaar with GST registration, The Economic Times report.
  • [Sep 23] Aadhaar card is now mandatory for nominees of your EPF account, Livemint report.

Digital India

  • [Sep 18] Indo-US ICT working group to meet on Sept 30, Oct 1, Medianama report.
  • [Sep 17] NITI Aayog frames guidelines for automated inspection of vehicles, ET Auto report.
  • [Sep 17] What TikTok told MEITY about its intermediary status, data collection, and policies for children, Medianama report.
  • [Sep 18] Soon, lands will have Aadhaar-like unique numbers, The Economic Times report; Business Today report.
  • [Sep 18] Drones to be used to digitally map India: report, Medianama report.
  • [Sep 18] PMO panel to release policy to boost handset manufacturing in India: report, Medianama report.
  • [Sep 19] Karnataka to set up exclusive body to boost innovation, The Hindu report.
  • [Sep 20] ‘Right To Access Internet Is Part Of Right To Privacy And Right To Education’: Kerala HC, Live Law report; Hindu report; NDTV report.

Data Protection and Privacy

  • [Sep 15] Privacy debate between govt, Facebook continues; no winner yet, Money Control report.
  • [Sep 16] Singapore, Philippines sign MoU on personal data protection, The Manila Times report.
  • [Sep 16] Industry wants careful drafting of regulations on non-personal data, The Economic Times report.
  • [Sep 16] Here are the top three reasons why data protection is required in every business, Firstpost report.
  • [Sep 20] Sensitive, super-sensitive data must be stored locally in India: RS Prasad, Business Standard report.
  • [Sep 20] Yet another data leak in Indian government database exposes multiple citizen IDs, Inc42 report.
  • [Sep 22] Infosys co-founder Kris Gopalakrishnan to lead panel on protection of non-personal data, Financial Express report.

E-Commerce

  • [Sep 16] Odisha government makes e-marketplace mandatory for procurements, The New Indian Express report.
  • [Sep 16] US antitrust officials investigate Amazon’s marketplace practices, Medianama report.
  • [Sep 17] Ministry of Consumer Affairs extends deadline for comments on draft E-Commerce Guidelines 2019 to October 31, Medianama report.

FinTech and Digital Payments

  • [Sep 16] WhatsApp to roll out its payment services by end of this year: report, Medianama report; The Economic Times report.
  • [Sep 18] RBI proposes norms to regulate payment gateways and payment aggregators, Entrackr report.
  • [Sep 19] Regulatory shock for fintech firms: RBI blocks unregulated access to consumer credit history, Entrackr report.
  • [Sep 19] DSCI, MeitY and Google India join hands for ‘Digital Payment Abhiyan’, The Economic Times report.

Cryptocurrencies

  • [Sep 16] The toss of a Bitcoin: How crypto ban will hurt 5 mn Indians, 20k Blockchain developers, The Economic Times report.
  • [Sep 16] US Sanctions three alleged crypto hacking groups from North Korea, Coin Desk report.
  • [Sep 16] Crypto firms assess how to comply with anti-money laundering standards, The Wall Street Journal report.
  • [Sep 19] Bitcoin and crypto wallets are now being targeted by malware, Forbes report.
  • [Sep 21] Weekends are for Altcoins when it comes to crypto market gains, ET Markets report.
  • [Sep 21] Chinese officials surprisingly chill on crypto, Decrypt report.

Cybersecurity

  • [Sep 13] Ransomware has a new target, Defense One report.
  • [Sep 16] Deep learning and machine learning to transform cybersecurity, Tech Wire Asia report.
  • [Sep 16] America needs a whole-of-society approach to cybersecurity. ‘Grand Challenges’ can help, Defense One report.
  • [Sep 17] Financial asset firm PCI ordered to pay $1.5 million for poor cybersecurity practices, ZD Net report.
  • [Sep 20] Current Act outdated, need to include cyber security in IT legal framework: DCA chief, The Indian Express report.
  • [Sep 20] 10% of IT budget should be used for cybersecurity: Rear Admiral Mohit Gupta, ET Times report.
  • [Sep 20] Once hacked, twice shy: How auto supplier Harman learned to fight cyber car jackers, ET Auto report.
  • [Sep 21] Cybersecurity a big opportunity for telcos, says IBM executive, The Economic Times report.
  • [Sep 23] Cybersecurity experts raise alarm over new SIM jacking threat, The New Indian Express report.
  • [Sep 23] Cybersecurity: Tackling the menace of phishing, Financial Express report.

Tech and Law Enforcement; Surveillance

  • [Sep 15] Facebook moots ‘prospective’ solution to WhatsApp issue; India stands firm on traceability, Business Today report; Livemint report.
  • [Sep 18] Chinese firms are driving the rise of AI surveillance across Africa, Quartz report.
  • [Sep 18] Documents reveal how Russia taps phone companies for surveillance, Tech Crunch report.
  • [Sep 20] WhatsApp traceability case petitioner asks court to remove Aadhaar from the plea, consider only ‘authorised govt proofs’, Medianama report; Inc42 report; Bar & Bench report.
  • [Sep 20] Chennai-based KPost says traceability is possible, wants to be impleaded in WhatsApp case, Medianama report.

Tech and National Security

  • [Sep 13] Pentagon’s former top hacker wants to inject some Silicon Valley into the defense industry, Defense One report.
  • [Sep 16] Here’s how startups are helping the Defence Ministry up its game, Money Control report.
  • [Sep 16] After 6 years in exile, Edward Snowden explains himself, Wired report.
  • [Sep 17] US tells Saudi Arabia oil attacks were launched from Iran, The Wall Street Journal report.
  • [Sep 17] Why Rafale jets may be inducted into IAF by next summer only, Livemint report.
  • [Sep 17] US Air Force to shift billions of dollars to network its weapons, Defense One report.
  • [Sep 18] India to achieve US$26 billion defence industry by 2025: Defence Minister, Business Standard report.
  • [Sep 18] Mitigating security risks from emerging technologies, Army Technology analysis.
  • [Sep 18] Revised draft defence procurement norms to be ready by November end, The Hindu report.
  • [Sep 20] The NSA is running a satellite hacking experiment, Defense One report.
  • [Sep 20] Army to host seminar on artificial intelligence next week; seeks to enhance lethality, The Economic Times report; India Today report; The New Indian Express report.
  • [Sep 20] Defence Procurement: Not a level playing field for private sector, PSUs still rule, Bharat Shakti report.
  • [Sep 20] Indian Air Force ‘accepts’ Rafale, formal hand over on Dussehra, Livemint report.
  • [Sep 22] Amid US-India blooming ties, Washington prepares to take down Indian air defence systems, EurAsian Times report.
  • [Sep 23] Government likely to order 36 more Rafale fighter jets, The Economic Times report.

Tech and Elections

  • [Sep 20] Social media companies raise concerns over Election Commission’s voluntary code of ethics, Medianama report.

Internal Security: J&K

  • [Sep 16] Supreme Court says normalcy to return to Kashmir but with national security in mind, India Today report.
  • [Sep 16] Farooq Abdullah booked under Public Safety Act, committee to decide duration of arrest: report, Financial Express report.
  • [Sep 17] Amnesty’s report on the (mis)use of Public Safety Act in J&K counters the govt’s narrative, Youth ki Awaaz report.
  • [Sep 18] China says Kashmir issue may not be a ‘major topic’ during Modi-Xi meet, Livemint report.
  • [Sep 19] In Pakistan-held Kashmir, growing calls for independence, The New York Times report.
  • [Sep 20] Kashmir residents say they are being charged by telcos despite no service, The Hindu report.
  • [Sep 20] UN Chief could discuss Kashmir issues at UNGA: UN spokesman, The Economic Times report.
  • [Sep 20] How military drones are becoming deadly weapons across the globe, The Economic Times report.
  • [Sep 22] Modi’s Digital India comes crashing down in Kashmir’s longest ever internet gag, The Wire report; The Hindu report.
  • [Sep 23] No clampdown in Kashmir, only communication line of terrorists stopped: Army Chief Bipin Rawat, India Today report.

Internal Security: NRC

  • [Sep 16] Those declared foreigners cannot file NRC appeal, says Assam govt, Hindustan Times report.
  • [Sep 18] NRC in Haryana, The Tribune report.
  • [Sep 18] NRC is an internal exercise, sovereign right of a country: EAM Jaishankar, Outlook report.
  • [Sep 18] Government will implement NRC across the country: Amit Shah, The Economic Times report; Times of India report.
  • [Sep 21] NRC Officials issue public advisory against collection of identification documents, Guwahati Plus report.
  • [Sep 22] NRC-excluded Gurkhas not to approach Foreigners’ Tribunals, seek empowered panel, The Hindu report; Times of India report.
  • [Sep 14] Final Assam NRC list, with 1.9 million exclusions, published online, Hindustan Times report.

National Security Law

  • [Sep 17] Pulwama to Aug 5: Delhi HC indicted govt for PSA arrests – in 80 pc cases, Financial Express report.
  • [Sep 16] What is the Public Safety Act under which Farooq Abdullah has been detained? News Nation report.
  • [Sep 16] 52 years on, still no sign of national defence university, The Times of India report.
  • [Sep 16] NSA Doval gets national security, foreign policy as PMO defines roles of top officials, The Asian Age report.

Big Tech

  • [Sep 15] Facebook VP Nick Clegg says India’s policies will decide the fate of the internet, Financial Express report.
  • [Sep 17] Facebook Establishes Structure and Governance for an Independent Oversight Board, Facebook Newsroom announcement; Medianama report.
  • [Sep 19] Facebook expands definition of terrorist organization to limit extremism, The New York Times report.
  • [Sep 22] Facebook is experimenting with AI that lets you digitally get dressed, The Source report.
  • [Sep 23] Google braces for landmark global privacy ruling, Bloomberg report.

Telecom and 5G

  • [Sep 16] 5G spectrum auction this year or in early 2020: Telecom Minister RS Prasad, Medianama report.
  • [Sep 20] TRAI opens consultation process for mergers and transfers in telecom sector, Medianama report.
  • [Sep 23] Indian masses have to wait 5-6 years to get true 5G experience, ET Telecom report.

More on Huawei

  • [Sep 17] Facing US ban, Huawei emerging as stronger tech competitor, The Hindu Business Line report; The Diplomat report.
  • [Sep 18] Huawei’s big test will be trying to sell a device with no Google apps outside China, Quartz report.
  • [Sep 18] Huawei users at risk as US blacklist cuts access to shared data on new cyber threats, Forbes report.
  • [Sep 20] Huawei makes sizeable 5G progress, bags 60 contracts: Ken Hu, The Economic Times report.
  • [Sep 21] Huawei unveils 5G training center in UK, ET Telecom report.

AI and Emerging Tech

  • [Sep 14] Artificial intelligence only goes so far in today’s economy, says MIT study, Forbes report.
  • [Sep 16] The US Govt will spend $1 bn on AI next year – not counting the Pentagon, Defense One report.
  • [Sep 18] Facial recognition systems to debut at Pune airport by 2020: report, Medianama report.
  • [Sep 18] AI stats news: AI is actively watching you in 75 countries, Forbes report.
  • [Sep 18] The Intel community wants to identify people from hundreds of yards away, Defense One report.
  • [Sep 19] Google setting up AI lab ‘Google Research India’ in Bengaluru, Entrackr report.
  • [Sep 20] India is planning a huge China-style facial recognition program, The Economic Times report.

Opinions and Analyses

  • [Sep 15] Nitin Pai, Livemint, The geopolitical profile of India tracks the economy’s trajectory.
  • [Sep 16] Paul Ravindranath, Tech Circle, Inclusion in technology is a compelling economic and business case.
  • [Sep 16] Markandey Katju, The Hindu, The litmus test for free speech.
  • [Sep 16] Vishal Chawla, Analytics India Magazine, What India can take away from Google’s settlement on employees’ freedom of expression.
  • [Sep 16] Editorial, Times of India, All talk: Fate of national defence university shows apathy towards defence modernisation.
  • [Sep 16] Jeff Hussey, Forbes, The gap between strong cybersecurity and demands for connectivity is getting massive.
  • [Sep 16] Kai Sedgwick, Bitcoin.com, How crypto became a gambler’s paradise.
  • [Sep 17] Ajai Shukla, Business Standard, In picking strategic partners, the defence ministry isn’t spoilt for choice.
  • [Sep 17] Anthony Pfaff, Defense One, The Saudi oil attacks aren’t game changing. They show how the game has changed.
  • [Sep 17] Kayla Matthews, Security Boulevard, Who’s financially responsible for cybersecurity breaches?
  • [Sep 17] Anirudh Gotety, ET Markets, Check crypto trade, ban won’t help.
  • [Sep 17] PS Ahluwalia, Livemint, Rafale will add heft to IAF’s deterrence capabilities.
  • [Sep 17] Lorand Laksai, Privacy International, How China is supplying surveillance technology and training around the world.
  • [Sep 18] Tabish Khair, The Hindu, In Kashmir, shaking the apple tree.
  • [Sep 18] Catrin Nye, BBC News, Live facial recognition surveillance ‘must stop’.
  • [Sep 18] Privacy International, The EU funds surveillance around the world: here’s what must be done about it.
  • [Sep 18] Joshua P Meltzer and Cameron F. Kerry, Brookings Institution, Cybersecurity and digital trade: Getting it right.
  • [Sep 19] Lt Gen HS Panag, The Print, Amit Shah’s political aim to recover PoK is not backed by India’s military capacity.
  • [Sep 20] Rifat Fareed, Al Jazeera, Farooq Abdullah’s arrest leaves India with few allies in Kashmir.
  • [Sep 22] Air Marshal (retd) M Matheswaran, Deccan Herald, Time for structural reforms, modernisation.

Call for Applications – Civil Liberties

Update: Deadline to apply extended to January 15, 2018! 

The Centre for Communication Governance at the National Law University Delhi (CCG) invites applications for research positions in its Civil Liberties team on a full time basis.

About the Centre

The Centre for Communication Governance is the only academic research centre dedicated to working on information law and policy in India, and in a short span of four years it has become a leading centre on information policy in Asia. It seeks to embed human rights and good governance within communication policy and to protect digital rights in India through rigorous academic research and capacity building.

The Centre routinely works with a range of international academic institutions and policy organizations. These include the Berkman Klein Center at Harvard University, the Programme in Comparative Media Law and Policy at the University of Oxford, the Center for Internet and Society at Stanford Law School, the Hans Bredow Institute at the University of Hamburg and the Global Network of Interdisciplinary Internet & Society Research Centers. We engage regularly with government institutions and ministries such as the Law Commission of India, the Ministry of Electronics & IT, the Ministry of External Affairs, the Ministry of Law & Justice and the International Telecommunication Union. We work actively to provide the executive and judiciary with useful research in the course of their decision making on issues relating to civil liberties and technology.

CCG has also constituted two advisory boards, a faculty board within the University and one consisting of academic members of our international networks. These boards will oversee the functioning of the Centre and provide high level inputs on the work undertaken by CCG from time to time.

About Our Work

The work at CCG is designed to build competence and raise the quality of discourse in research and policy around issues concerning civil liberties and the Internet, cybersecurity and global Internet governance. The research and policy output is intended to catalyze effective, research-led policy making and informed public debate around issues in technology and Internet governance.

The work of our civil liberties team covers the following broad areas:

  1. Freedom of Speech & Expression: Research in this area focuses on human rights and civil liberties in the context of the Internet and emerging communication technology in India. Research on this track squarely addresses the research gaps around the architecture of the Internet and its impact on free expression.
  2. Access, Markets and Public Interest: Research in this area considers questions of access, including how the human right to free speech could help guarantee access to the Internet. It identifies areas where competition law may need to intervene to ensure free, fair and human rights-compatible access to the Internet, and to opportunities to communicate using online services. Work in this area considers how existing competition and consumer protection law could be applied to ensure that freedom of expression in new media, and particularly on the internet, is protected given market realities on the supply side. Under this track we will publish material on net neutrality concerns, which are closely associated with competition, innovation, media diversity and the protection of human rights, especially the rights to free expression and to receive information, and particularly substantive equality across media. We will also engage with existing theories of media pluralism in this context.
  3. Privacy, Surveillance & Big Data: Research in this area focuses on surveillance as well as data protection practices, laws and policies. The work may be directed either at the normative questions that arise in the context of surveillance or data protection, or at empirical work, including data gathering and analysis, with a view to enabling policy and law makers to better understand the pragmatic concerns in developing realistic and effective privacy frameworks. This work area extends to the right to be forgotten and data localization.

Role

CCG is a young and continuously evolving organization and the members of the centre are expected to be active participants in building a collaborative, merit led institution and a lasting community of highly motivated young researchers.

Selected applicants will ordinarily be expected to design and produce units of publishable research with the Director(s)/senior staff members. They will also recommend and assist with designing and executing policy positions and external actions on a broad range of information policy issues.

Equally, they will also be expected to participate in other work, including writing opinion pieces, blog posts, press releases, memoranda, and help with outreach. The selected applicants will also represent CCG in the media and at other events, roundtables, and conferences and before relevant governmental, and other bodies. In addition, they will have organizational responsibilities such as providing inputs for grant applications, networking and designing and executing Centre events.

Qualifications

The Centre welcomes applications from candidates with advanced degrees in law, public policy and international relations.

  • Candidates should preferably be able to provide evidence of an interest in human rights / technology law and policy / Internet governance / national security law. In addition, they must have a demonstrable capacity for high-quality, independent work.
  • In addition to written work, a project/ programme manager within CCG will be expected to play a significant leadership role. This ranges from proactive agenda-setting to administrative and team-building responsibilities.
  • Successful candidates for the project / programme manager position should show great initiative in managing both their own and their team’s workloads. They will also be expected to lead and motivate their team through high stress periods and in responding to pressing policy questions.

However, the length of your resume is less important than the other qualities we are looking for. As a young, rapidly-expanding organization, CCG anticipates that all members of the Centre will have to manage large burdens of substantive as well as administrative work in addition to research. We are looking for highly motivated candidates with a deep commitment to building information policy that supports and enables human rights and democracy.

At CCG, we aim very high and we demand a lot of each other in the workplace. We take great pride in high-quality outputs and value individuality and perfectionism. We maintain the highest ethical standards in our work and workplace, and love people who manage all of this while being as kind and generous as possible to colleagues, collaborators and everyone else within our networks. A sense of humour will be most welcome. Even if you do not fit the requirements mentioned in the bulleted points above but bring the other qualities we look for, we would love to hear from you.

[The Centre reserves the right to not fill the position(s) if it does not find suitable candidates among the applicants.]

Positions

Based on experience and qualifications, successful applicants will be placed in the following positions. Please note that our interview panel has the discretion to determine which profile would be most suitable for each applicant.

  • Programme Officer (2-4 years’ work experience)
  • Project Manager (4-6 years’ work experience)
  • Programme Manager (6-8 years’ work experience)

A Master’s degree from a highly regarded programme might count towards work experience.

CCG staff work at the Centre’s offices on National Law University Delhi’s campus. The positions on offer are for a duration of one year, and we expect a commitment of two years.

Remuneration

The salaries will be competitive, and will usually range from ₹50,000 to ₹1,20,000 per month, depending on multiple factors including relevant experience, the position and the larger research project under which the candidate can be accommodated.

Where candidates demonstrate exceptional competence in the opinion of the interview panel, there is a possibility for greater remuneration.

Procedure for Application

Interested applicants are required to send the following information and materials by December 30, 2017 to ccgcareers@nludelhi.ac.in.

  1. Curriculum Vitae (maximum 2 double spaced pages)
  2. Expression of Interest in joining CCG (maximum 500 words).
  3. Contact details for two referees (at least one academic). Referees must be informed that they might be contacted for an oral reference or a brief written reference.
  4. One academic writing sample of between 1000 and 1200 words (essay or extract, published or unpublished).

Shortlisted applicants may be called for an interview.


Response to Online Extremism: Beyond India

In our previous posts, we traced the Indian response to online extremism as well as the alternative regulatory methods adopted worldwide to counter extremist narratives spread via the internet. At the international level, the United Nations has emphasised the need to counter extremists who use the internet for propaganda and recruitment. This post explores the responses of three countries – the UK, France and the USA – that have often been targets of extremism. While strategies to counter extremism form part of larger counter-terror programmes, this post focuses on some measures adopted by these States that target online extremism specifically.

United Kingdom

In 2011, the UK adopted its ‘Prevent’ strategy, which seeks to ‘respond to the ideological challenge’ posed by terrorism and ‘prevent people from being drawn into terrorism’. The strategy seeks to counter ‘extremism’, which is defined as:

“vocal or active opposition to fundamental British values, including democracy, the rule of law, individual liberty and mutual respect and tolerance of different faiths and beliefs. We also include in our definition of extremism calls for the death of members of our armed forces”.

This definition has been criticised as over-broad and vague, with the potential to ‘clamp down on free expression’. In 2013, the Prime Minister’s Task Force on Tackling Radicalisation and Extremism (“Task Force”) submitted its report identifying the critical issues in tackling extremism and suggesting steps for the future. The Task Force recommended that the response to extremism must not be limited to dealing with those who promote violence; rather, it must target the ideologies that lead individuals to extremism. The report highlighted the need to counter extremist narratives, especially online. Some of its recommendations include building capabilities, working with Internet companies to restrict access to such material, improving the process for public reporting of such content and including extremism as a filter for content accessed online. The report also recommended promoting community integration and suggested steps to prevent the spread of extremist narratives in schools and institutions of higher education. While suggesting these methods, the report reaffirmed that the proposals are not designed to ‘restrict lawful comment or debate’.

A number of recommendations made by the Task Force have been adopted in the UK subsequently. For instance, the UK Government has set up a mechanism by which individuals can anonymously report online material promoting terrorism or extremism. Universities and colleges became legally bound to put in place policies to prevent extremist radicalization on campuses in 2015. Further, local authorities, the health sector, prisons and the police have all been accorded duties to aid in the fight against extremism.

The UK is also considering a Counter-Extremism and Safeguarding Bill (the “Bill”), which proposes tougher counter-extremism measures. The Bill empowers certain authorities to ban extremist groups, disrupt individuals engaging in extremist behaviour and close down premises that support extremism. However, the Bill has been criticised extensively by Parliament’s Joint Committee on Human Rights. The Committee identified gaps such as the failure to adequately define core concepts like ‘non-violent extremism’ and the use of measures like ‘banning orders’ that are over-broad and susceptible to misuse.

France

Reports reveal that France has become the largest source of Western fighters for the Islamic State, with nearly 9,000 radicalised individuals currently residing in the country. Over the last few years, France has also witnessed a series of terrorist attacks, which has led it to bolster its counter-terrorism and counter-extremism measures.

In November 2014, the French parliament passed anti-terror legislation that permits the government to block websites that ‘glorify terrorism’ and to censor speech deemed an ‘apology for terrorism’, among other measures. A circular released in January 2015 explains that “apology for terrorism” refers to acts which present or comment on instances of terrorism “while basing a favourable moral judgement on the same”. In 2015, France blocked five websites, in one of the first instances of this censorship in action. Since then, France has continued to censor online speech under the broad offence of ‘apology for terrorism’, with harsh penalties. It has been reported that nearly 87 websites were blocked between January and November 2015, and that more than 700 people have been arrested under this new offence. The offence has been criticised for being vague, resulting in frequent prosecution of legitimate speech that does not constitute incitement to violence. In May 2015, another law was passed strengthening the surveillance powers of the State and requiring Internet Service Providers to give intelligence agencies unfettered access; this statute empowers authorities to order the immediate handover of user data without prior court approval. These legislations have been criticised for being over-broad and for incorporating measures that are unnecessary and excessive.

In addition to these measures, France also launched an anti-Jihadism campaign in 2015 which seeks to counter extremism and radicalization throughout the society, specifically focusing on schools and prisons.

United States

The principal institution that develops counter-extremism strategies in the USA is the Bureau of Counterterrorism and Countering Violent Extremism. The Bureau has developed a Department of State & USAID Joint Strategy on Countering Violent Extremism, which aims to counter efforts by extremists to radicalise, recruit and mobilise followers to violence. To pursue this aim, the strategy incorporates measures like enhanced bilateral and multilateral diplomacy, strengthening of criminal justice systems and increased engagement with different sectors like prisons, educational institutions and civil society. Promoting alternative narratives is a key component of the Bureau’s counter-extremism programme. However, the strategy has also been criticised for revealing very few details about what it entails, despite extensive budget allocations. A lawsuit has been filed under the Freedom of Information Act claiming that authorities have refused to disclose information about the programme. Organisations fear that the initiatives under the programme have the potential to criminalise legitimate speech and target certain communities.

Conclusion

State responses to extremism have increased substantially in the past few years, with new programmes and measures being put in place to counter extremist narratives in the fight against terrorism. While the measures adopted differ from state to state, some strategies, such as promoting de-radicalisation in educational institutions and prisons, are common across them. At the same time, some of the measures adopted threaten to impact freedom of speech due to vague definitions and over-broad responses. It is critical for authorities to strike a balance between countering extremist narratives and preserving free thought and debate, more so in institutions of learning. Consequently, measures to counter extremist narratives must be specific and narrowly tailored, with sufficient safeguards, in order to balance the right to security with the civil liberties of individuals.

Speaking Out Against Online Extremism: Counter-Speech and its Effectiveness

By Arpita Biswas


This post is a part of a series on online extremism, where we discuss the regulatory and legal issues surrounding this growing problem. The current post focuses on counter-speech, one of those regulatory techniques.

What is Counter Speech?

Counter-speech or counter-narratives in the context of extremism have been defined as “messages that offer a positive alternative to extremist propaganda, or alternatively aim to deconstruct or delegitimise extremist narratives”.

This definition has been broken down into three categories to explain the different approaches:

a) Counter speech that is intended to negate extremist speech.

b) Counter speech focussed on positive narratives.

Later on in the post, we will discuss an initiative which addresses issues of cultural identity faced by young Muslims. This narrative does not necessarily focus on dispelling biases, but rather on initiating discussions on related issues.

c) Informative counter-speech. This narrative focuses on dispelling extremist propaganda. Unlike the first category, it intends to negate misinterpretations perpetuated by extremists. This is usually related to organizations or individuals in the public eye.

For the purposes of this post, counter-speech is limited to counter-narratives on online platforms. Counter-speech is, however, not limited to text messages or videos, and can extend to various other media, like the FBI’s interactive game ‘Don’t Be a Puppet’.

Why Counter-Speech?

The United Nations Security Council in May 2016 discussed the necessity of an international framework to combat online extremism. During the meeting, the dangers of extremists exploiting social media platforms and the possible remedies that should follow were discussed. The discussion stressed the need to ‘safeguard the freedom of the press’ by not resorting to excessive censorship. The forthcoming international framework could benefit from utilizing counter-speech as a viable alternative to censorship.

Using counter-speech, or employing counter-narratives, to fight online extremism might avoid the criticism faced by other anti-extremist measures. As discussed in our previous post, internal regulation and state-controlled regulation both run the risk of ‘over-censorship’.

A counter-speech strategy would not rely on ‘taking down’ content. Taking down or blocking access to content acts only as momentary relief, since the same content can crop up elsewhere. In some instances, when extremist accounts on Twitter and WhatsApp were taken down, new accounts emerged shortly after, or propaganda moved to encrypted platforms.

The UN Special Rapporteur on Freedom of Expression stated that “repressive approaches would have the reverse effect of reinforcing the narrative of extremist ideologies”.

In addition, counter-speech would address the root cause of online extremism: indoctrination. The UN Special Rapporteur also stated that blocking websites would not be the right approach, and that “strategies addressing the root causes of such viewpoints” should be adopted instead.

A platform which allows open discussions or debates about beliefs might lead to a more effective anti-extremism regime.

Organizations utilizing counter speech

The United States government has initiated a few counter-speech programmes. The Bureau of Counterterrorism and Countering Violent Extremism has introduced initiatives like the ‘Think Again Turn Away’ campaign, which focuses on spreading counter-narratives on YouTube, Twitter and other such platforms. The Federal Bureau of Investigation (FBI) has launched an interactive game to sensitize people to the dangers of extremism. ‘Don’t Be a Puppet’ aims to educate young people on questions like ‘What are known violent extremist groups?’ and ‘How do violent extremists make contact?’.

There are several counter-speech initiatives operated by private bodies. A few, namely ExitUSA and Average Mohamed, have been studied by the Institute for Strategic Dialogue (ISD). ExitUSA produces videos intended for ‘white supremacists’. Their approach is informative and intends to negate popular extremist propaganda. Average Mohamed is an initiative for young Somalis in the United States. Among the videos produced by them, a few, titled ‘Identity in Islam’ and ‘A Muslim in the West’, intend to address cultural issues faced by young Muslims. Through animated videos featuring the protagonist ‘Average Mohamed’, a young boy in the United States, they initiate positive counter-speech among viewers.

Speech Gone Wrong – Shortcomings of Counter-Speech

The previously mentioned ‘Don’t Be a Puppet’ initiative has itself been criticized for employing bigoted narratives; its counter-narrative has been criticized for being anti-Islamic.

In addition to claims of bigotry, a few of the government-led initiatives have also been criticized for being opaque. Earlier this year, the White House organized a summit on Countering Violent Extremism (CVE), during which multi-million dollar plans were initiated. Following the summit, a Senate sub-committee was instituted and a sizeable proportion of the 2017 fiscal budget was allocated to CVE. However, lawsuits have been filed under the Freedom of Information Act demanding details about the initiatives.

More importantly, the impact or success of counter-speech has not been substantiated. In the ISD study, for instance, the researchers state that determining the success or outcome of counter-speech initiatives is “extremely difficult”. Faced with these limitations, their methodology is based on the ‘sustained engagement’ they had with users, measured by the comments, tweets and messages exchanged between the counter-speech organization and the user.

Lastly, referring back to our previous post, some private organizations have also removed content under the guise of counter-speech. Facebook, in collaboration with the Online Civil Courage Initiative (OCCI), vowed to employ counter-speech online, stating that it was more effective than censorship. However, as evidenced by OCCI’s manual, the organization was allowed to take down ‘antagonistic’ content, leading to censorship.

Future of Counter Speech

While counter-speech suffers fewer setbacks than other regulatory techniques, it needs more transparency to function better. As of now, there are no universally applicable guidelines for counter-speech. Guidelines and rules could help establish transparency and prevent instances of censorship or bigotry.

Arpita Biswas is a Programme Officer at the Centre for Communication Governance at National Law University Delhi

Indian Response to Online Extremism

The United Nations General Assembly resolution adopted in July 2016 highlights the need to counter extremist narratives online. In the recent past, extremist content, usually content aiding terrorist activity, has become a global concern. This post examines the methods adopted by state authorities and private entities to counter such online extremism.


The response to the growing use of the Internet to spread messages of terror and hate has been an increase in the censorship of online content. In a reply to the Lok Sabha on the use of social media to spread terrorism, the government acknowledged that the ‘potential for spread of terror through social media was higher than ever.’ The government highlighted in its reply that it restricts the spread of terrorism on social media by taking prompt action to block content and by regularly monitoring social media sites with the help of security/intelligence agencies. Additionally, it highlighted that intermediaries are prohibited from hosting objectionable or unlawful content as per the intermediary guidelines.

Blocking access to websites/URLs has been used frequently by the government to suppress extremist narratives online. In December 2014, Internet Service Providers were directed to block 32 websites, which included file-sharing websites like ‘Vimeo’ and ‘Dailymotion’, web-hosting services like ‘Weebly’, and software code repositories like ‘Github’ and ‘Sourceforge’. Reportedly, the block was based on an advisory issued by the Anti-Terrorism Squad (ATS) stating that the sites were hosting anti-India content relating to ISIS. Subsequently, some of these websites were unblocked after they signed an undertaking stating that they would not allow such propaganda to be hosted and would work with the Government to remove such content. Further, in January 2016, the chief of the ATS disclosed that 94 websites linked to ISIS had been blocked in 2015 as well. In February 2016, the Government blocked ‘Jihadology.net’, an online academic repository of Jihadist primary source material, analysis and translations of documents. The website continues to remain blocked. The blocking of websites that host legitimate content like ‘Jihadology.net’ or ‘Vimeo’ indicates that these blocking orders are rarely executed in a targeted manner, and often end up being over-broad.

Public access to websites is blocked under Section 69A of the Information Technology Act, 2000 and the rules framed under it. Blocking takes place when ‘nodal officers’ appointed by government agencies send in requests for blocking access to information, or in the case of a court-ordered block. These requests are reviewed by a committee, and a ‘designated officer’ who chairs the committee issues the approved blocking orders to the service providers. This procedure was upheld by the Supreme Court in Shreya Singhal v Union of India. The Court also held that the procedure to block websites requires written reasons for blocking to be stated in the order issued by the designated officer, as well as a right to a pre-decisional hearing. However, these safeguards are seldom followed. Further, the blocking process continues to be shrouded in secrecy. The blocking rules require strict confidentiality to be maintained regarding any request or complaint received and any action taken by the government to subsequently block websites. This lack of transparency, absent or insufficient reasons, and over-broad orders pose a continuing threat to the freedom of expression.

However, the state response has not been limited to censorship. It was reported that the Maharashtra ATS would soon launch its own website to propagate a counter-narrative. Further, the ATS chief disclosed that the police would also attempt to de-radicalise the youth. Earlier, in February 2015, the Maharashtra ATS had begun an intervention programme in educational institutions to initiate dialogue with the youth and prevent radicalisation. In October 2016, it was reported that, due to growing concern about the radicalisation of youth by terrorist outfits like ISIS, the Ministry of Home Affairs (MHA) had appointed an advisor on cyber and social media. The advisor will work with the MHA to develop strategies to track and counter radicalisation on social media.


The Indian Government is not alone in its concerns about online extremism. CCG has traced the global response and alternate regulatory methods adopted by private parties here. In India, private corporations like Facebook have also responded to online extremism.

Since July 2016, India has witnessed wide-spread censorship in Kashmir, ranging from suppression of newspapers to Internet shutdowns, in response to the ongoing protests following the death of Burhan Wani, a commander of the Hizbul Mujahideen. Amidst this, Facebook has also blocked accounts and taken down content regarding Kashmir from across the globe for violation of its ‘community standards’, which prohibit content that praises or supports terrorists. For instance, Tomoghna Halder, a student at the University of California, was repeatedly blocked from posting after he uploaded pictures of graffiti on Kashmir. Similarly, a video posted by a local daily featuring separatist leader Syed Ali Geelani’s arrest was removed, and a Kashmir-based satire page, ‘Jajeer Talkies’, was also blocked. In another instance of private censorship, Facebook disabled the account of activist and lecturer Huma Dar for her pro-Burhan posts. It is clear that in the wake of the conflict in Kashmir, Facebook has resorted to privatized censorship to curb what it deems ‘extremist’ reactions.

Private censorship of what is categorised as ‘extremist’ content is not unique to Facebook. Since February 2016, Twitter has suspended 235,000 accounts globally for violation of its policies that prohibit the promotion of terrorism. Reports also indicate that YouTube and Facebook will use automation to silently block extremist videos. This form of private censorship raises a host of concerns, foremost among them the chilling effect on speech and the over-blocking of content. The determination by Facebook and other intermediaries of which content ‘praises or supports terrorists’ raises larger concerns still. It permits Facebook to clamp down on alternate voices, as is evident from instances related to the Kashmir conflict. Consequently, by exercising this power, Facebook and other intermediaries acquire the ability to influence the online narrative on these issues. This poses a severe threat to freedom of speech and expression.


The response of the Indian State to online extremism has been censorship of content. This has also been accompanied by censorship of content by private players. Presently, the use of counter-speech as an effective tool in countering the extremist narratives is limited, though not absent. However, there remains little evaluation of what is ‘online extremism’ and under what circumstances such content should be limited. Due to the opaque system of blocking websites, the State is able to limit judicial scrutiny. Further, there is an absence of an effective remedy in instances of private censorship. There are few avenues available to users in case of wrongful takedowns. The absence of an effective policy has led to frequent over-blocking and silencing of alternate voices. This underscores the need to examine what constitutes ‘online extremism’ and the most effective mechanisms to counter it.

Online Extremism and Hate Speech – A Review of Alternate Regulatory Methods

By Arpita Biswas


Online extremism and hate speech on the internet are growing global concerns. In 2016, the EU signed a code of conduct with social media companies including Facebook, Google and Twitter to effectively regulate hate speech on the internet. The code, amongst other measures, contemplated stricter sanctions on intermediaries (social media companies) in the form of a ‘notice and takedown’ regime, a practice which has been criticised for creating a ‘chilling’ effect and leading to over-censorship.

While this system is still in place, social media companies are attempting to adopt alternative regulatory methods. If companies routinely tracked their websites for illegal content before government notices were issued, they could save time and money. This post will attempt to offer some insight into the alternative modes of regulation used by social media companies.

YouTube Heroes – Content Regulation by Users

YouTube Heroes was launched in September 2016 with the aim of regulating content efficiently. Under this initiative, YouTube users are allowed to ‘mass-flag’ content that goes against the Community Guidelines. The Community Guidelines specifically prohibit instances of hate speech: as per the Guidelines, content that “promotes violence or hatred against individuals” based on certain attributes amounts to hate speech. These ‘attributes’ include but are not limited to race, gender and religion.

‘Mass-flagging’ is just one of the many tools available to a YouTube Hero. The system is based on points and ranks, with users earning points for helping translate videos and for flagging inappropriate content. As they climb the ranking system, users gain exclusive perks, like the ability to directly contact YouTube staff. ‘Mass-flagging’ is in essence the same as flagging a video, an option that YouTube already offered. However, the incentive of gaining access to private moderator forums and YouTube staff could lead to users flagging videos for extraneous reasons. While ‘mass-flagged’ videos are reviewed by YouTube moderators before being taken down, the initiative has still raised concerns.

It has been criticised for giving free rein to users, who may flag content because of personal biases, leading to ‘harassment campaigns’. Popular YouTube users have panned YouTube Heroes, fearing the possibility of their videos being targeted by ‘mobs’. Despite the review system in place, users have also expressed doubts about YouTube’s ability to accurately take down flagged content. Since the initiative is in its testing stage, it is difficult to determine what its outcome will be.

Facebook’s Online Civil Courage Initiative – Counter Speech

Governmental authorities across the world have been attempting to curb hate speech and online extremism in myriad ways. For instance, in November 2015, an investigation involving one of Facebook’s European managing directors was launched. The Managing Director was accused of letting Facebook host hate speech. As the investigation drew to an end, Facebook representatives were not implicated. However, the investigation marked an increase in international pressure on Facebook to deal effectively with hate speech.

Due to growing pressure from governmental authorities, Facebook began to ‘outsource’ content removal. In January 2016, a German company called ‘Arvato’ was delegated the task of reviewing and taking down reported content, alongside Facebook’s Community Operations Team. There is limited public information on the terms of service or rules Arvato is bound by. In the absence of such information, ‘outsourcing’ could contribute to a private censorship regime. With no public guidelines in place, the outsourcing process is neither transparent nor accountable.

Additionally, Facebook has been working with other private bodies to regulate content online. Early in 2016, Facebook, in partnership with several NGOs, launched the Online Civil Courage Initiative (OCCI) to combat online extremism with counter-speech. COO Sheryl Sandberg said that ‘censorship’ would not put an end to hate speech and that counter-speech would be a far more effective mode of regulation. Under this initiative, civil society organisations and NGOs are ‘rewarded’ with ad credits, marketing resources, and strategic support for countering speech online.

It is pertinent to note that the Information Pack on Counter Speech Engagement is the only set of guidelines made public by OCCI. These guidelines provide information on planning a counter-speech campaign. An interesting aspect of the information pack is the section on ‘Responding and Engaging during a campaign’. Under this section, comments are categorised as ‘supportive, negative, constructive, antagonistic’, and a table suggests how different categories of comments should be ‘engaged with’. Surprisingly, ‘antagonistic’ comments are to be ‘ignored, hidden or deleted’. The information pack does not attempt to define any of these categories, and the vaguely worded guidelines could lead to confusion amongst NGOs. While studies have shown that counter-speech might be the most effective way to deal with online extremism, OCCI would have to make major changes to reach the goals of the counter-speech movement.

In October 2016, Facebook reportedly came under scrutiny again. A German Federal Minister stated that Facebook was still not dealing effectively with hate speech targeted at refugees, and that another investigation might be in the pipeline.


It is yet to be seen whether the alternative regulatory methods adopted by social media companies will effectively deal with hate speech and online extremism.

It is important to note that social media companies are ‘outsourcing’ internal regulation to private bodies or users (YouTube Heroes, Arvato and OCCI). These private bodies might amplify the problems faced under the intermediary liability system, which could lead to ‘over-censorship’. That system has been criticised for its ‘notice and takedown’ regime: non-compliance with takedown orders attracts strict sanctions, and fear of these sanctions could lead intermediaries to take down content that falls in grey areas but is not illegal.

However, under the internal regulation method, social media companies will continue to function under the fear of state pressure. Private bodies like Arvato and NGOs in affiliation with OCCI will also regulate content, with the incentive of receiving ‘advertisement credit’ and ‘points’.  This could lead to over-reporting for the sake of incentives. Coupled with pressure from the state, this might lead to a ‘chilling’ effect.

In addition, some of these private bodies do not operate in a transparent manner. For instance, providing public information on Arvato’s content regulation activities and the guidelines they are bound by would help create a far more accountable system. Further, the OCCI needs to have clearer, well-defined policies to fulfill the objectives of disseminating counter-speech.

Arpita Biswas is a Programme Officer at the Centre for Communication Governance at National Law University Delhi



Free Speech & Violent Extremism: Special Rapporteur on Terrorism Weighs in

Written by Nakul Nayak

Yesterday, the Human Rights Council came out with an advance unedited version of a report (A/HRC/31/65) of the Special Rapporteur on protection of human rights while countering terrorism. This report in particular deals with protecting human rights while preventing and countering violent extremism. The Special Rapporteur, Ben Emmerson, has made some interesting remarks on extremist speech and its position in the hierarchy of protected and unprotected speech.

First, it should be noted that the Report tries to grapple with and distinguish between the commonly substituted terms “extremism” and “terrorism”. Noting that violent extremism lacks a consistent definition across countries, and in some instances any definition at all, the Report goes on to liken it to terrorism. It also acknowledges the lack of understanding of the “radicalization process” by which innocent individuals become violent extremists. While the Report does not suggest an approach to defining either term, it briefly contrasts the definitions laid down in various countries. However, there does seem to be some consensus that the ambit of violent extremism is broader than terrorism and consists of a range of subversive activities.

The important section of the Report, from the perspective of free speech, deals with incitement to violent extremism and efforts to counter it. The Report cites UN Resolution 1624 (2005), which calls for the need to adopt legislative measures as effective means of addressing incitement to terrorism. However, the Report insists on the existence of “serious human rights concerns linked to the criminalization of incitement, in particular around freedom of expression and the right to privacy.”[1] The Report then goes on to quote the UN Secretary General and the Special Rapporteur on Free Expression laying down various safeguards for laws criminalizing incitement. In particular, these laws must prosecute incitement that is directly related to terrorism and has the intention and effect of promoting terrorism, and must include judicial recourse, among other things.[2]

This gives us an opportunity to discuss the standards for free speech restrictions in India. While the Supreme Court expressly imported the American speech-protective standard of incitement to imminent lawless action in Arup Bhuyan, confusion persists over the standard applicable to justify any restriction on free speech. The Supreme Court’s outdated ‘tendency’ test, which does not require an intimate connection between speech and action, still finds place in today’s law reports. This is evident from the celebrated case of Shreya Singhal. After a lengthy analysis of public order jurisprudence in India and advocating a direct connection between speech and public disorder, Justice Nariman muddies the waters by examining section 66A of the IT Act under the ‘tendency’ test. Some coherence in incitement standards is needed.

The next pertinent segment of the Report deals specifically with the impact of State measures on the restriction of expression, especially online content. Interestingly, the Report suggests that “Governments should counter ideas they disagree with, but should not seek to prevent non-violent ideas and opinions from being discussed.”[3] This brings to mind the recent proposal of the National Security Council Secretariat (NSCS) to set up a National Media Analytics Centre (NMAC) to counter negative online narratives through press releases, briefings, and conferences. While nothing concrete has emerged, with the proposal still in the pipeline, safeguards must be implemented to address chilling-effect and privacy concerns. It may be noted here that the Report’s remarks are limited to countering speech that forms an indispensable part of the “radicalization process”. The NMAC, however, covers negative content across the online spectrum, with its only marker being the “intensity or standing of the post”.

An important paragraph of the Report, perhaps the gist of the free speech perspective on combating violent extremism, is its visible unease in determining the position of extremist speech glorifying and advocating terrorism. The Report notes the Human Rights Committee’s stand that terms such as “glorifying” terrorism must be clearly defined to avoid unnecessary incursions on free speech. At the same time, the “Secretary General has deprecated the ‘troubling trend’ of criminalizing glorification of terrorism, considering it to be an inappropriate restriction on expression.”[4]

These propositions are in stark contrast to India’s terror legislation, the Unlawful Activities Prevention Act, 1967. Section 13 punishes anyone who “advocates, … advises … the commission of any unlawful activity …” An unlawful activity has been defined in section 2(o) to include any speech act that

  • supports a claim of “secession of a part of the territory of India from the Union”, or
  • “disclaims, questions … the sovereignty and territorial integrity of India”, or
  • rather draconically, “causes … disaffection against India.”

It will also be noted that all three offences are content-based restrictions on free speech i.e. limitations based purely on the subjects that the words deal in. Textually, these laws do not necessarily require an examination of the intent of the speaker, the impact of the words on the audience, or indeed the context in which the words are used.

Finally, the Report notes the views of the Special Rapporteur on Free Expression on hate speech, who characterized most efforts to counter it as “misguided”. However, the Report also “recognizes the importance of not letting hate speech go unchecked …” In one sense, the Special Rapporteur expressly rejects American First Amendment jurisprudence, which does not acknowledge hate speech as a permissible ground for restricting free speech. At the same time, the Report’s insistence that “the underlying causes should also be addressed”, instead of being satisfied with mere prosecutions, is a policy aspiration that needs serious thought in India.

This Report on violent extremism (as distinct from terrorism) is much-needed and timely. The strong human rights concerns espoused, with the attendant importance attached to a context-driven approach in prosecuting speech acts, are a sobering reminder of the many inadequacies of Indian terror law and its respect for fundamental rights.

Nakul Nayak was a Fellow at the Centre for Communication Governance from 2015-16.

[1] Para 24.

[2] Para 24.

[3] Para 38.

[4] Para 39.