Response to Online Extremism: Beyond India

In our previous posts, we traced the Indian response to online extremism as well as the alternate regulatory methods adopted worldwide to counter extremist narratives spread via the internet. At the international level, the United Nations has emphasised the need to counter extremists who use the internet for propaganda and recruitment. This post explores the responses of three countries – the UK, France and the USA – that have often been the target of extremism. While strategies to counter extremism form part of larger counter-terror programmes, this post focuses on measures adopted by these States that specifically target online extremism.

United Kingdom

In 2011, the UK adopted the ‘Prevent’ strategy, which seeks to ‘respond to the ideological challenge’ posed by terrorism and ‘prevent people from being drawn into terrorism’. The strategy seeks to counter ‘extremism’, which is defined as:

“vocal or active opposition to fundamental British values, including democracy, the rule of law, individual liberty and mutual respect and tolerance of different faiths and beliefs. We also include in our definition of extremism calls for the death of members of our armed forces”.

This definition has been criticised as over-broad and vague, with the potential to ‘clamp-down on free expression’. In 2013, the Prime Minister’s Task Force on Tackling Radicalisation and Extremism (“Task Force”) submitted its report identifying the critical issues in tackling extremism and suggesting steps for the future. The Task Force recommended that the response to extremism must not be limited to dealing with those who promote violence; rather, it must target the ideologies that lead individuals to extremism. The report highlighted the need to counter extremist narratives, especially online. Its recommendations include building capabilities, working with Internet companies to restrict access to such material, improving the process for public reporting of such content and including extremism as a filter for content accessed online. The report also recommended promoting community integration and suggested steps to prevent the spread of extremist narratives in schools and institutions of higher education. While suggesting these methods, the report reaffirmed that the proposals are not designed to ‘restrict lawful comment or debate’.

A number of the Task Force’s recommendations have since been adopted in the UK. For instance, the UK Government has set up a mechanism by which individuals can anonymously report online material promoting terrorism or extremism. In 2015, universities and colleges became legally bound to put in place policies to prevent extremist radicalisation on campuses. Further, local authorities, the health sector, prisons and the police have all been accorded duties to aid the fight against extremism.

The UK is also considering a Counter-Extremism and Safeguarding Bill (the “Bill”), which proposes tougher counter-extremism measures. The Bill empowers certain authorities to ban extremist groups, disrupt individuals engaging in extremist behaviour and close down premises that support extremism. However, the Bill has been criticised extensively by the Parliament’s Joint Committee on Human Rights. The Committee identified gaps such as the failure to adequately define core concepts like ‘non-violent extremism’ and the use of measures like ‘banning orders’ that are over-broad and susceptible to misuse.

France
Reports reveal that France has become the largest source of Western fighters for the Islamic State, with nearly 9,000 radicalised individuals currently residing in the country. Over the last few years, France has also witnessed a series of terrorist attacks, prompting it to bolster its counter-terrorism and counter-extremism measures.

In November 2014, the French parliament passed anti-terror legislation that permits the government to block websites that ‘glorify terrorism’ and censor speech deemed an ‘apology for terrorism’, among other measures. A circular released in January 2015 explains that ‘apology for terrorism’ refers to acts which present or comment on instances of terrorism “while basing a favourable moral judgement on the same”. In 2015, France blocked five websites, in one of the first instances of censoring jihadist content. Since then, France has continued to censor online speech under the broad offence of ‘apology for terrorism’, with harsh penalties. It has been reported that nearly 87 websites were blocked between January and November 2015, and more than 700 people have been arrested for the new offence. The offence has been criticised as vague, resulting in frequent prosecution of legitimate speech that does not constitute incitement to violence. In May 2015, another law was passed strengthening the surveillance powers of the State and requiring Internet Service Providers to give unfettered access to intelligence agencies. The statute empowers authorities to order the immediate handover of user data without prior court approval. These laws have been criticised as over-broad and as incorporating measures that are unnecessary and excessive.

In addition to these measures, France launched an anti-jihadism campaign in 2015 which seeks to counter extremism and radicalisation throughout society, with a specific focus on schools and prisons.

United States

The principal institution that develops counter-extremism strategies in the USA is the Bureau of Counterterrorism and Countering Violent Extremism. The Bureau has developed the Department of State & USAID Joint Strategy on Countering Violent Extremism, which aims to counter efforts by extremists to radicalize, recruit and mobilize followers to violence. To pursue this aim, the strategy incorporates measures like enhanced bilateral and multilateral diplomacy, strengthening of the criminal justice system and increased engagement with sectors like prisons, educational institutions and civil society. Promoting alternate narratives is a key component of the Bureau’s counter-extremism programme. However, the strategy has also been criticised for revealing very few details about what it entails, despite extensive budget allocations. A lawsuit has been filed under the Freedom of Information Act claiming that authorities have refused to disclose information about the programme. Organisations fear that initiatives under the programme have the potential to criminalize legitimate speech and target certain communities.


State responses to extremism have increased substantially in the past few years, with new programmes and measures being put in place to counter extremist narratives in the fight against terrorism. While the measures adopted differ from state to state, some strategies, such as promoting de-radicalisation in educational institutions and prisons, are common. At the same time, some of the measures adopted threaten freedom of speech due to vague definitions and over-broad responses. It is critical for authorities to strike a balance between countering extremist narratives and preserving free thought and debate, more so in institutions of learning. Consequently, measures to counter extremist narratives must be specific and narrowly tailored, with sufficient safeguards, in order to balance the right to security with the civil liberties of individuals.


Speaking Out Against Online Extremism: Counter-Speech and its Effectiveness


This post is a part of a series on online extremism, where we discuss the regulatory and legal issues surrounding this growing problem. The current post focuses on counter-speech, one of these regulatory techniques.

What is Counter Speech?

Counter-speech or counter-narratives in the context of extremism have been defined as “messages that offer a positive alternative to extremist propaganda, or alternatively aim to deconstruct or delegitimise extremist narratives”.

This definition has been broken down into three categories to explain the different approaches:

a) Counter-speech that is intended to negate extremist speech.

b) Counter-speech focused on positive narratives.

Later in the post, we will discuss an initiative that addresses issues faced by young Muslims relating to cultural identity. This narrative does not necessarily focus on dispelling biases, but rather on initiating discussions on related issues.

c) Informative counter-speech. This narrative focuses on dismantling extremist propaganda. Unlike the first category, it specifically intends to negate misinterpretations perpetuated by extremists, and is usually related to organisations or individuals in the public eye.

For the purposes of this post, counter-speech is limited to counter-narratives on online platforms. Such speech is, however, not limited to text messages or videos, and can extend to various other mediums, like the FBI’s interactive game ‘Don’t Be a Puppet’.

Why Counter-Speech?

In May 2016, the United Nations Security Council discussed the necessity of an international framework to combat online extremism. During the meeting, the dangers of extremists exploiting social media platforms, and the possible remedies, were discussed. The discussion stressed the need to ‘safeguard the freedom of the press’ by not resorting to excessive censorship. Any forthcoming international framework could benefit from utilising counter-speech as a viable alternative to censorship.

Using counter-speech or counter-narratives to fight online extremism might avoid the criticism faced by other anti-extremist measures. As discussed in our previous post, internal regulation and state-controlled regulation both run the risk of ‘over-censorship’.

A counter-speech strategy would not rely on ‘taking down’ content. Taking down or blocking access to content offers only momentary relief, since the same content can crop up elsewhere. In some instances, when extremist accounts on Twitter and WhatsApp were taken down, new accounts emerged shortly after, or the propaganda moved to encrypted platforms.

The UN Special Rapporteur on Freedom of Expression stated that “repressive approaches would have the reverse effect of reinforcing the narrative of extremist ideologies”.

In addition, counter-speech would treat the root cause of online extremism: indoctrination. The UN Special Rapporteur also stated that ‘blocking websites’ would not be the right approach and that “strategies addressing the root causes of such viewpoints” should be adopted instead.

A platform which allows open discussions or debates about beliefs might lead to a more effective anti-extremism regime.

Organizations utilizing counter speech

The United States government has initiated a few counter-speech programmes. The Bureau of Counterterrorism and Countering Violent Extremism has introduced initiatives like the ‘Think Again Turn Away’ campaign, which focuses on spreading counter-narratives on YouTube, Twitter and other such platforms. The Federal Bureau of Investigation (FBI) has launched an interactive game, ‘Don’t Be a Puppet’, to sensitise people to the dangers of extremism. It aims to educate young people on questions like ‘What are known violent extremist groups?’ and ‘How do violent extremists make contact?’.

There are several counter-speech initiatives operated by private bodies. A few, namely ExitUSA and Average Mohamed, have been studied by the Institute for Strategic Dialogue (ISD). ExitUSA produces videos intended for ‘white supremacists’; its approach is informative and intends to negate popular extremist propaganda. Average Mohamed is an initiative for young Somalis in the United States. Among the videos it has produced, a few, titled ‘Identity in Islam’ and ‘A Muslim in the West’, address cultural issues faced by young Muslims. Through animated videos built around the protagonist ‘Average Mohamed’, a young boy in the United States, the initiative seeks to spark positive counter-speech among viewers.

Speech Gone Wrong- Shortcomings of Counter-Speech

The previously mentioned ‘Don’t Be a Puppet’ initiative has itself been accused of employing bigoted narratives, with its counter-narrative criticised for being anti-Islamic.

In addition to claims of bigotry, a few of the government-led initiatives have also been criticised for being opaque. Earlier this year, the White House organised a summit on Countering Violent Extremism (CVE), during which multi-million dollar plans were initiated. Following the summit, a Senate sub-committee was instituted and a sizeable proportion of the 2017 fiscal budget was allocated to CVE. However, lawsuits have been filed under the Freedom of Information Act demanding details about these initiatives.

More importantly, the impact or success of counter-speech has not been substantiated. In the ISD study, for instance, the researchers stated that determining the success or outcome of counter-speech initiatives is “extremely difficult”. Faced with these limitations, their methodology was based on the ‘sustained engagement’ the initiatives had with users, measured by the comments, tweets and messages exchanged between the counter-speech organisation and the user.

Lastly, referring back to our previous post, some private organisations have also removed content under the guise of counter-speech. Facebook, in collaboration with the Online Civil Courage Initiative (OCCI), vowed to employ counter-speech online, stating that it was more effective than censorship. However, as evidenced by OCCI’s manual, the organisation was allowed to take down ‘antagonistic’ content, leading to censorship.

Future of Counter Speech

While counter-speech suffers from fewer setbacks than other regulatory techniques, it needs more transparency to function better. As of now, there are no universally applicable guidelines for counter-speech. Guidelines and rules could help establish transparency and prevent instances of censorship or bigotry.

Indian Response to Online Extremism

The United Nations General Assembly resolution adopted in July 2016 highlights the need to counter extremist narratives online. In the recent past, extremist content, usually content aiding terrorist activity, has become a global concern. This post examines the methods adopted by state authorities and private entities to counter such online extremism.


The response to the growing use of the Internet to spread messages of terror and hate has been an increase in the censorship of online content. In a reply to the Lok Sabha on the use of social media to spread terrorism, the government acknowledged that the ‘potential for spread of terror through social media was higher than ever.’ It highlighted that it restricts the spread of terrorism on social media by taking prompt action to block content and by regularly monitoring social media sites with the help of security and intelligence agencies. Additionally, it noted that intermediaries are prohibited from hosting objectionable or unlawful content as per the intermediary guidelines.

Blocking access to websites/URLs has been used frequently by the government to suppress extremist narratives online. In December 2014, Internet Service Providers were directed to block 32 websites, which included file-sharing websites like ‘Vimeo’ and ‘Dailymotion’, web-hosting services like ‘Weebly’ and software code repositories like ‘Github’ and ‘Sourceforge’. Reportedly, the block was based on an advisory issued by the Anti-Terrorism Squad (ATS) that the sites were hosting anti-India content relating to ISIS. Subsequently, some of these websites were unblocked after they signed an undertaking stating that they would not allow such propaganda to be hosted and would work with the Government to remove such content. Further, in January 2016, the chief of the ATS disclosed that 94 websites linked to ISIS had been blocked in 2015 as well. In February 2016, the Government also blocked an online academic repository of Jihadist primary source material, analysis and translations of such documents; the website continues to remain blocked. The blocking of websites that host legitimate content, like academic repositories or ‘Vimeo’, indicates that these blocking orders are rarely executed in a targeted manner, and often end up being over-broad.

Public access to websites is blocked under Section 69A of the Information Technology Act, 2000 and the rules framed under it. Websites are blocked when ‘nodal officers’ appointed by government agencies send in requests for blocking access to information, or pursuant to a court-ordered block. These requests are reviewed by a committee, and the ‘designated officer’ who chairs the committee issues the approved blocking orders to service providers. This procedure was upheld by the Supreme Court in Shreya Singhal v Union of India. The Court also held that the procedure requires written reasons for blocking to be stated in the order issued by the designated officer, as well as a right to a pre-decisional hearing. However, these safeguards are seldom followed, and the blocking process continues to be shrouded in secrecy. The blocking rules require strict confidentiality to be maintained regarding any request or complaint received and any action subsequently taken by the government to block websites. This lack of transparency, absent or insufficient reasons, and over-broad orders pose a continuing threat to the freedom of expression.

However, the state response has not been limited to censorship. It was reported that the Maharashtra ATS would soon launch its own website to propagate a counter-narrative, and its chief disclosed that the police would also attempt to de-radicalise the youth. Earlier, in February 2015, the Maharashtra ATS had begun an intervention programme in educational institutions to initiate dialogue with the youth and prevent radicalisation. In October 2016, it was reported that, due to growing concern about the radicalisation of youth by terrorist outfits like ISIS, the Ministry of Home Affairs (MHA) appointed an adviser on cyber and social media. The adviser will work with the MHA to develop strategies to track and counter radicalisation on social media.


The Indian Government is not alone in its concerns about online extremism. CCG has traced the global response and alternate regulatory methods adopted by private parties here. In India, private corporations like Facebook have also responded to online extremism.

Since July 2016, India has witnessed wide-spread censorship in Kashmir, ranging from the suppression of newspapers to Internet shutdowns, in response to the ongoing protests following the death of Burhan Wani, commander of the Hizbul Mujahideen. Amidst this, Facebook has also blocked accounts and taken down content regarding Kashmir from across the globe for violation of its ‘community standards’, which prohibit content that praises or supports terrorists. For instance, Tomoghna Halder, a student at the University of California, was repeatedly blocked from posting after he uploaded pictures of graffiti on Kashmir. Similarly, a video posted by a local daily featuring separatist leader Syed Ali Geelani’s arrest was removed, and a Kashmir-based satire page, ‘Jajeer Talkies’, was blocked. In another instance of private censorship, Facebook disabled the account of activist and lecturer Huma Dar for her pro-Burhan posts. It is clear that, in the wake of the conflict in Kashmir, Facebook has resorted to privatised censorship to curb what it deems ‘extremist’ reactions.

Private censorship of content categorised as ‘extremist’ is not unique to Facebook. Since February 2016, Twitter has suspended 235,000 accounts globally for violating its policies prohibiting the promotion of terrorism. Reports also indicate that YouTube and Facebook will use automation to silently block extremist videos. This form of private censorship raises a host of concerns, foremost among them the chilling effect on speech and the over-blocking of content. The determination by Facebook and other intermediaries of which content ‘praises or supports terrorists’ raises larger concerns still: it permits Facebook to clamp down on alternate voices, as is evident from instances related to the Kashmir conflict. By the exercise of this power, Facebook and other intermediaries acquire the ability to influence the online narrative on these issues. This poses a severe threat to freedom of speech and expression.


The response of the Indian State to online extremism has primarily been the censorship of content, accompanied by censorship by private players. Presently, the use of counter-speech as a tool against extremist narratives is limited, though not absent. However, there remains little evaluation of what constitutes ‘online extremism’ and under what circumstances such content should be limited. Due to the opaque system of blocking websites, the State is able to limit judicial scrutiny. Further, there is an absence of an effective remedy in instances of private censorship, with few avenues available to users in cases of wrongful takedowns. The absence of an effective policy has led to frequent over-blocking and the silencing of alternate voices. This underscores the need to examine what constitutes ‘online extremism’ and the most effective mechanisms to counter it.

Online Extremism and Hate Speech – A Review of Alternate Regulatory Methods


Online extremism and hate speech on the internet are growing global concerns. In 2015, the EU signed a code of conduct with social media companies including Facebook, Google and Twitter to effectively regulate hate speech on the internet. The code, amongst other measures, discussed stricter sanctions on intermediaries (social media companies) in the form of a ‘notice and takedown’ regime, a practice which has been criticised for effectively creating a ‘chilling’ effect and leading to over-censorship.

While this system is still in place, social media companies are attempting to adopt alternative regulatory methods. If companies could ensure that they routinely track their websites for illegal content before government notices are issued, this could save them time and money. This post will attempt to offer some insight into the alternative modes of regulation used by social media companies.

YouTube Heroes – Content Regulation by Users

YouTube Heroes was launched in September 2016 with the aim of regulating content more efficiently. Under this initiative, YouTube users are allowed to ‘mass-flag’ content that goes against the Community Guidelines, which specifically prohibit hate speech. As per the Guidelines, content that “promotes violence or hatred against individuals” based on certain attributes amounts to hate speech. These ‘attributes’ include, but are not limited to, race, gender and religion.

‘Mass-flagging’ is just one of the many tools available to a YouTube Hero. The system is based on points and ranks, with users earning points for helping translate videos and for flagging inappropriate content. As they climb the ranking system, users gain access to exclusive benefits, like the ability to directly contact YouTube staff. ‘Mass-flagging’ is in essence the same as flagging a video, an option YouTube already offered. However, the incentive of gaining access to private moderator forums and YouTube staff could lead users to flag videos for extraneous reasons. While ‘mass-flagged’ videos are reviewed by YouTube moderators before being taken down, the initiative has still raised concerns.

It has been criticised for giving free rein to users, who may flag content because of personal biases, leading to ‘harassment campaigns’. Popular YouTube users have panned YouTube Heroes, apprehending the possibility of their videos being targeted by ‘mobs’. Despite the review system in place, users have also expressed doubts about YouTube’s ability to accurately take down flagged content. Since the initiative is still in its testing stage, it is difficult to determine what its outcome will be.

Facebook’s Online Civil Courage Initiative – Counter Speech

Governmental authorities across the world have been attempting to curb hate speech and online extremism in myriad ways. For instance, in November 2015, an investigation was launched involving one of Facebook’s European Managing Directors, who was accused of letting Facebook host hate speech. As the investigation drew to an end, Facebook representatives were not implicated. However, the investigation marked an increase in international pressure on Facebook to deal effectively with hate speech.

Due to growing pressure from governmental authorities, Facebook began to ‘outsource’ content removal. In January 2016, a German company called ‘Arvato’ was delegated the task of reviewing and taking down reported content, alongside Facebook’s Community Operations Team. There is limited public information on the terms of service or rules Arvato is bound by. In the absence of such information, ‘outsourcing’ could contribute to a private censorship regime; with no public guidelines in place, the outsourcing process is neither transparent nor accountable.

Additionally, Facebook has been working with other private bodies to regulate content online. Early in 2016, Facebook, in partnership with several NGOs, launched the Online Civil Courage Initiative (OCCI) to combat online extremism with counter-speech. COO Sheryl Sandberg said that ‘censorship’ would not put an end to hate speech and that counter-speech would be a far more effective mode of regulation. Under this initiative, civil society groups and NGOs are ‘rewarded’ with ad credits, marketing resources and strategic support for countering speech online.

It is pertinent to note that the Information Pack on Counter Speech Engagement is the only set of guidelines made public by the OCCI. These guidelines provide information on planning a counter-speech campaign. An interesting aspect of the information pack is the section on ‘Responding and Engaging during a campaign’, under which comments are categorised as ‘supportive’, ‘negative’, ‘constructive’ or ‘antagonistic’, and a table suggests how each category should be ‘engaged with’. Surprisingly, the table suggests that ‘antagonistic’ comments be ‘ignored, hidden or deleted’. The information pack does not attempt to define any of these categories, and the vaguely worded guidelines could lead to confusion amongst NGOs. While studies have shown that counter-speech might be the most effective way to deal with online extremism, the OCCI would have to make major changes to reach the goals of the counter-speech movement.

In October 2016, Facebook reportedly came under scrutiny again. A German Federal Minister stated that Facebook was still not dealing effectively with hate speech targeted at refugees, and that another investigation might be in the pipeline.


It is yet to be seen whether the alternative regulatory methods adopted by social media companies will effectively deal with hate speech and online extremism.

It is important to note that social media companies are ‘outsourcing’ internal regulation to private bodies or users (YouTube Heroes, Arvato and the OCCI). These bodies might amplify the problems faced by the intermediary liability system, which has been criticised for its ‘notice and takedown’ regime and could lead to ‘over-censorship’. Non-compliance with takedown orders attracts strict sanctions, and fear of these sanctions could lead intermediaries to take down content that falls in grey areas but is not illegal.

However, under the internal regulation method, social media companies will continue to function under the fear of state pressure. Private bodies like Arvato and the NGOs affiliated with the OCCI will also regulate content, with the incentive of receiving ‘advertisement credits’ and ‘points’. This could lead to over-reporting for the sake of incentives, which, coupled with pressure from the state, might produce a ‘chilling’ effect.

In addition, some of these private bodies do not operate in a transparent manner. Providing public information on Arvato’s content regulation activities and the guidelines it is bound by, for instance, would help create a far more accountable system. Further, the OCCI needs clearer, well-defined policies to fulfil its objective of disseminating counter-speech.