Understanding the ‘NetzDG’: Privatised censorship under Germany’s new hate speech law

By William James Hargreaves

The Network Enforcement Act

The Network Enforcement Act (NetzDG), passed by the German Government on 30 June, allows regulators to fine social media companies up to 50 million euros – approximately 360 crore rupees – if they persistently fail to remove hate speech from their platforms within 24 hours of the content being reported. Companies have up to one week where the illegality of the content is debatable.

NetzDG is intended to hold social media companies financially liable for the opinions posted on their platforms. The Act will effectively subject social media platforms to the stricter content standards demanded of traditional media broadcasters.

Why was the act introduced?

Germany is one of the world’s strictest regulators of hate speech. The State’s Criminal Code covers defamation, public threats of violence and incitement to illegal conduct, and provides for incarceration for Holocaust denial or inciting hatred against minorities. Germany is a country sensitive to the persuasive power of oratory in radicalizing opinion. The parameters of these sensitivities are being tested as the influx of more than one million asylum seekers and migrants has catalyzed a notably belligerent public discourse.

In response to the changing discourse, Facebook and a number of other social media platforms consented in December 2015 to the terms of a code of conduct drafted by the Merkel Government. The code of conduct was intended to ensure that platforms adhered to Germany’s domestic law when regulating user content. However, a study monitoring Facebook’s compliance found the company deleted or blocked only 39 percent of reported content, a rate that put Facebook in breach of the agreement.

NetzDG turns the voluntary agreement into a binding legal obligation, making Facebook liable for any future failure to adhere to its terms.

In a statement made following the law’s enactment, German Justice Minister Heiko Maas declared: ‘With this law, we put an end to the verbal law of the jungle on the Internet and protect the freedom of expression for all… This is not a limitation, but a prerequisite for freedom of expression’. The premise of Minister Maas’s position, and the starting point for the principles that validate the illegality of hate speech, is that verbal radicalization is often the precursor to physical violence.

As the world’s predominant social media platform, Facebook has curated unprecedented and, in some respects, unconditioned access to people and their opinions. Given the extent of that access, this post will focus on the possible effects of the NetzDG on Facebook and its users.

Facebook’s predicament

  • Regulatory methods

How Facebook intends to observe the NetzDG is unclear. The social media platform, whose users now constitute one-quarter of the world’s population, has previously been unwilling to disclose the details of its internal censorship processes. However, given the potential financial exposure and the sustained increase in user content, Facebook must, to some extent, increase its capacity to evaluate and regulate reported content. In response, Facebook announced in May that it would nearly double the number of employees tasked with removing content that violates its guidelines. Whether this increase in capacity will be sufficient will be determined in time.

However, and regardless of the move’s effectiveness, Facebook’s near doubling of capacity implies that human interpretation is the final authority, and that implication raises a number of questions: To what extent can manual censorship keep up with the consistent increase in content? Can the same processes maintain efficacy in a climate where hate speech is increasingly prevalent in public discourse? If automated censorship is necessary, who decides the algorithm’s parameters and how sensitive might those parameters be to the nuances of expression and interpretation? In passing the NetzDG, the German Government has relinquished the State’s authority to fully decide the answer to these questions. The jurisdiction of the State in matters of communication regulation has, to a certain extent, been privatised.

  • Censorship standards

Recently, the investigative journalism platform ProPublica claimed to have obtained documents purported to be internal censorship guidelines used at Facebook. The unverified guidelines instructed employees to remove the phrase ‘migrants are filth’ but to permit ‘migrants are filthy’. Whether the documents are legitimate is to some extent irrelevant: they provide a useful example of the specificity required where the aim is to guide one person’s interpretation of language toward a specific end – in this instance toward a correct judgment of legality or illegality.
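The brittleness of such rules is easier to see when one tries to mechanise them. The sketch below is purely illustrative (Facebook’s actual tooling is not public, and only the two reported phrases are used); it shows that neither a naive substring rule nor an exact-match rule reproduces the distinction the leaked guideline draws:

```python
# Purely illustrative sketch: not Facebook's actual tooling, which is not public.
# The two phrases come from the reported ProPublica example; the guideline removes
# "migrants are filth" but permits "migrants are filthy".

BANNED = "migrants are filth"

def substring_rule(post: str) -> bool:
    # Flags any post containing the banned words anywhere in the text.
    return BANNED in post.lower()

def exact_sentence_rule(post: str) -> bool:
    # Flags only a post that consists of exactly the banned sentence.
    return post.lower().strip(".!? ") == BANNED

for post in ("Migrants are filth.", "Migrants are filthy.", "These migrants are filth."):
    print(post, substring_rule(post), exact_sentence_rule(post))

# The substring rule over-blocks: it also catches the permitted sentence, because
# "filth" is a prefix of "filthy". The exact-sentence rule under-blocks: the third
# post slips through. Neither reproduces the distinction the guideline draws,
# which is why human discretion remains the final authority.
```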

Regardless of the degree of specificity, it is impossible for any formulation of guidelines to cover every possible manifestation of hate speech. Interpreting reported content will therefore necessarily require some degree of discretion. This necessity raises the question: to what extent will affording private entities discretionary powers of censorship impede freedoms of communication, particularly where the discretion afforded is conditioned by financial risk and a determination is required within 24 hours?

  • Facebook’s position

Statements made by Facebook prior to the legislation’s enactment expressed concern about the effect the Act would have on the already complex issue of content moderation. ‘The draft law provides an incentive to delete content that is not clearly illegal when social networks face such a disproportionate threat of fine’, a statement noted. ‘(The Act) would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies’. Facebook’s reservation is telling: the company’s reluctance to adopt the role of moderator to the extent required points to the potential consequences of the liability imposed by the Act.

The problem with imposing this form of liability

Any decision made by a social media platform to censor user content will be supported by the anti-discrimination principles prescribed by the NetzDG. However, where the motivation behind discretionary decision-making shifts away from social utility towards financial management, the guiding considerations become efficiency and risk minimisation. Efficiency and risk minimisation in this instance require Facebook to either (i) increase capacity, which in turn results in an increased financial burden, or (ii) adopt guidelines that minimise exposure.

Seemingly, the approach adopted by Facebook is to increase capacity. However, Facebook’s concerns that the Act creates financial incentives to adopt guidelines that minimise exposure are significant. Such concerns demonstrate an understanding that requiring profit-motivated companies to do the work of the State within a 24-hour time frame will necessarily require a different set of parameters than those imposed on the regulation of oral hate speech. If Facebook, in drafting and applying those parameters, decides to err on the side of caution and, in some instances, censor otherwise legal content, that decision will directly infringe the freedom of communication enjoyed by German citizens.

A democracy must be able to accommodate contrasting opinions if it purports to respect rights of communication and expression. Conversely, limitations on rights enjoyed may be justified if they benefit the majority. The NetzDG is Germany’s recognition that the nature of online communication – the speed at which ideas promulgate and proliferate, and the disconnect between comment and consequence created by online anonymity – requires the existing limitations on the freedom of communication to be adapted. Whether instances of infringement are warranted in the current climate is a difficult and complicated extension of the debate between the utility of regulating hate speech and the corresponding consequences for the freedoms of communication and expression. The decision to pass the NetzDG suggests the German Government considers the risk of infringement acceptable when measured against the consequences of unfettered hate speech.

Public recognition that NetzDG poses a risk is important. It is best practice within a democracy that any new limit to liberty, oral or otherwise, be questioned and a justification given. Here the justification seems well-founded. However, the answers to the questions posed by sceptics may prove telling as Germany positions itself at the forefront of the debate over online censorship.

(William is a student at the University of Melbourne and is currently interning at CCG)

How (not) to get away with murder: Reviewing Facebook’s live streaming guidelines

Introduction

The recent shooting in Cleveland, live streamed on Facebook, has brought the social media company’s regulatory responsibilities into question. Since the launch of Facebook Live in 2016, the service’s role in raising political awareness has been acknowledged. However, the service has also been used to broadcast several instances of graphic violence.

The streaming of violent content (including instances of suicide, murder and gang rape) has raised serious questions about Facebook’s responsibility as an intermediary. While it is not technically feasible for Facebook to review all live videos while they are being streamed, or to filter them before they are streamed, the platform does have a routine procedure in place to take down such content. This post will examine the guidelines in place for taking down live streamed content and discuss alternatives to the existing reporting mechanism.

What guidelines are in place?

Facebook has ‘community standards’ in place. However, its internal regulation methods are unknown to the public. Live videos have to be in compliance with the ‘community standards’, which specify that Facebook will remove content relating to ‘direct threats’, ‘self-injury’, ‘dangerous organizations’, ‘bullying and harassment’, ‘attacks on public figures’, ‘criminal activity’ and ‘sexual violence and exploitation’.

The company has stated that it ‘only takes one report for something to be reviewed’. This system of review has been criticized since graphic content could go unnoticed without a report. In addition, this form of reporting can be ineffective since there is no mandate of ‘compulsory reporting’ for viewers. Incidentally, the Cleveland shooting video was not detected by Facebook until it was flagged as ‘offensive’, a couple of hours after the incident. The company has also stated that it is working on developing ‘artificial intelligence’ that could help put an end to these broadcasts. However, it currently relies on the reporting mechanism, under which ‘thousands of people around the world’ review posts that have been reported. The reviewers check whether the content goes against the ‘community standards’ and ‘prioritize videos with serious safety implications’.
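Facebook’s internal pipeline is not public, but the workflow described above (a single report is enough to queue a post for human review, and videos with serious safety implications are reviewed first) can be sketched in a few hypothetical lines:

```python
# Hypothetical sketch of the reported review workflow; none of this reflects
# Facebook's actual systems.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: int                         # 0 = serious safety implication, 1 = other
    post_id: str = field(compare=False)
    reason: str = field(compare=False)

review_queue: list[Report] = []

def file_report(post_id: str, reason: str, safety_risk: bool) -> None:
    # A single report is enough to place the post in the review queue.
    heapq.heappush(review_queue, Report(0 if safety_risk else 1, post_id, reason))

file_report("post_456", "spam", safety_risk=False)
file_report("live_123", "graphic violence in live stream", safety_risk=True)

# Human reviewers work through the queue, safety-critical reports first.
while review_queue:
    report = heapq.heappop(review_queue)
    print(f"Reviewing {report.post_id}: {report.reason}")
```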

While deciding if a video should be taken down, the reviewers also take the ‘context and degree’ of the content into consideration. For instance, content that is aimed at ‘raising awareness’, even if it displays violence, will be allowed, whereas content celebrating such violence will be taken down. For example, when a live video of civilian Philando Castile being shot by a police officer in Minnesota went viral, Facebook kept the video up on its platform, stating that it did not glorify the violent act.

Regulation

Other than the internal guidelines by which Facebook regulates itself, there have not been instances of government regulators, like the United States’ Federal Communications Commission, intervening. Unlike the realm of television, where the FCC regulates content and deems material ‘inappropriate’, social media websites are protected from content regulation.

This brings up the question of intermediary liability and Facebook’s liability for hosting graphic content. Under American law, there is a distinction between ‘publishers’ and ‘common carriers’. A common carrier only ‘enables communications’ and does not ‘publish content’. If a platform edits content, it is most likely a publisher. A ‘publisher’ has a higher level of responsibility for content hosted on its platform than a ‘carrier’. In most instances, social media companies are covered under Section 230 of the Communications Decency Act, a safe harbor provision, by which they are not held liable for third-party content. However, questions have been raised about whether Facebook is a ‘publisher’ or a ‘common carrier’, and there seems to be no conclusive answer.

Conclusion

Several experts have considered possible solutions to this growing problem. Some believe that such features should be limited to certain partners and opened up to the public only once additional safeguards and better artificial intelligence technologies are in place. In these precarious situations, enforcing stricter laws on intermediaries might not resolve the issue at hand. Some jurisdictions have ‘mandatory reporting’ provisions, specifically for crimes of sexual assault. In India, under Section 19 of the Protection of Children from Sexual Offences Act, 2012, ‘any person who has apprehension that an offence…is likely to be committed or has knowledge that such an offence has been committed’ has to report such an offence. In the context of cyber-crimes, this system of ‘mandatory reporting’ would shift the onus onto the viewers and supplement the existing reporting system. Mandatory provisions of this nature do not exist in the United States, where most of the larger social media companies are based.

Ultimately, possible solutions should focus on strengthening the existing reporting system, rather than holding social media platforms liable.

Response to Online Extremism: Beyond India

In our previous posts, we traced the Indian response to online extremism as well as the alternative regulatory methods adopted worldwide to counter extremist narratives spread via the internet. At the international level, the United Nations has emphasised the need to counter extremists who use the internet for propaganda and recruitment. This post explores the responses of three countries that have often been the target of extremism: the UK, France and the USA. While strategies to counter extremism form part of larger counter-terror programmes, this post focuses on some measures adopted by these States that target online extremism specifically.

United Kingdom

In 2011, the UK adopted the ‘Prevent’ strategy, which seeks to ‘respond to the ideological challenge’ posed by terrorism and ‘prevent people from being drawn into terrorism’. This strategy seeks to counter ‘extremism’, which is defined as:

“vocal or active opposition to fundamental British values, including democracy, the rule of law, individual liberty and mutual respect and tolerance of different faiths and beliefs. We also include in our definition of extremism calls for the death of members of our armed forces”.

This definition has been criticised as being over-broad and vague, which can potentially ‘clamp-down on free expression’. In 2013, the Prime Minister’s Task Force on Tackling Radicalisation and Extremism (“Task Force”) submitted its report identifying the critical issues in tackling extremism and suggesting steps for the future. The Task Force recommended that the response to extremism must not be limited to dealing with those who promote violence; rather, it must target the ideologies that lead individuals to extremism. The report highlighted the need to counter extremist narratives, especially online. Some of its recommendations include building capabilities, working with Internet companies to restrict access to such material, improving the process for public reporting of such content and including extremism as a filter for content accessed online. The report also recommended the promotion of community integration and suggested steps to prevent the spread of extremist narratives in schools and institutions of higher education. While suggesting these methods, the report reaffirmed that the proposals are not designed to ‘restrict lawful comment or debate’.

A number of the Task Force’s recommendations have since been adopted in the UK. For instance, the UK Government has set up a mechanism by which individuals can anonymously report online material promoting terrorism or extremism. In 2015, universities and colleges became legally bound to put in place policies to prevent extremist radicalization on campuses. Further, local authorities, the health sector, prisons and the police have all been accorded duties to aid in the fight against extremism.

The UK is also considering a Counter-Extremism and Safeguarding Bill (the “Bill”), which proposes to bring in tougher counter-extremism measures. The Bill empowers certain authorities to ban extremist groups, disrupt individuals engaging in extremist behaviour and close down premises that support extremism. However, the Bill has been criticised extensively by Parliament’s Joint Committee on Human Rights. The Committee identified gaps such as the failure to adequately define core terms like ‘non-violent extremism’ and the use of measures like ‘banning orders’, which are over-broad and susceptible to misuse.

France

Reports reveal that France has become the largest source of Western fighters for the Islamic State, and nearly 9,000 radicalized individuals are currently residing in France. Over the last few years, France has also witnessed a series of terrorist attacks, which has resulted in the bolstering of the country’s counter-terrorism and counter-extremism measures.

In November 2014, the French parliament passed anti-terror legislation that permits the government to block websites that ‘glorify terrorism’ and censor speech that is deemed to be an ‘apology for terrorism’, among other measures. A circular released in January 2015 explains that “apology for terrorism” refers to acts which present or comment on instances of terrorism “while basing a favourable moral judgement on the same”. In 2015, France blocked five websites, in one of the first instances of censoring jihadist content. Since then, France has continued to censor online speech for the broad offence of ‘apology for terrorism’ with harsh penalties. It has been reported that nearly 87 websites were blocked between January and November 2015, and more than 700 people have been arrested under this new offence of ‘apology for terrorism’. The offence has been criticised for being vague, resulting in frequent prosecution of legitimate speech that does not constitute incitement to violence. In May 2015, another law was passed strengthening the surveillance powers of the State and requiring Internet Service Providers to give unfettered access to intelligence agencies. This statute empowers authorities to order the immediate handover of user data without prior court approval. These legislations have been criticised for being over-broad and incorporating measures that are unnecessary and excessive.

In addition to these measures, France also launched an anti-Jihadism campaign in 2015 which seeks to counter extremism and radicalization throughout the society, specifically focusing on schools and prisons.

United States

The principal institution that develops counter-extremism strategies in the USA is the Bureau of Counterterrorism and Countering Violent Extremism. The Bureau has developed a Department of State & USAID Joint Strategy on Countering Violent Extremism. The strategy aims to counter efforts by extremists to radicalize, recruit and mobilize followers to violence. To pursue this aim, the strategy incorporates measures like enhanced bilateral and multilateral diplomacy, strengthening of the criminal justice system and increased engagement with different sectors like prisons, educational institutions and civil society. Promoting alternative narratives is a key component of the Bureau’s counter-extremism programme. However, it is important to note that this strategy has also been criticised for revealing very few details about what it entails, despite extensive budget allocations. A lawsuit has been filed under the Freedom of Information Act claiming that authorities have refused to reveal information about the programme. Organisations fear that the initiatives under the programme have the potential of criminalizing legitimate speech and targeting certain communities.

Conclusion

State responses to extremism have increased substantially in the past few years, with new programmes and measures being put in place to counter extremist narratives in the fight against terrorism. While the measures adopted differ from state to state, some strategies, such as promoting de-radicalisation in educational institutions and prisons, are common. At the same time, some of the measures adopted threaten to impact freedom of speech due to vague definitions and over-broad responses. It is critical for authorities to strike a balance between countering extremist narratives and preserving free thought and debate, more so in institutions of learning. Consequently, measures to counter extremist narratives must be specific and narrowly tailored, with sufficient safeguards, in order to balance the right to security with the civil liberties of individuals.

John Doe orders: The Balancing Act between Over-Blocking and Curbing Online Piracy

The Bombay High Court recently passed a John Doe order laying down a set of safeguards to minimise over-blocking. The Delhi High Court, on the other hand, ordered the blocking of 73 websites for showing “substantial” pirated content. This blog post traces the history of John Doe orders in India and their impact on free speech, and evaluates the recent developments in this area.

John Doe Orders and their Impact on Freedom of Speech

John Doe or Ashok Kumar orders usually refer to ex-parte interim injunctions issued against defendants, some of whom may be unknown or unidentified at the time of obtaining the order. Well recognised in Commonwealth countries, the concept was imported to India in 2002 by an order passed against unknown cable operators to give relief to a TV channel in the case of Taj Television v. Rajan Mandal. The trend of issuing John Doe orders to prevent piracy picked up pace in 2011, when the Delhi High Court passed a series of such orders. Since then, a stream of such orders has been passed authorising copyright holders to take action against unknown persons for future violations of their rights against piracy without moving the court again. The orders authorise copyright holders to intimate ISPs to take down the allegedly violating content. In 2012, the Madras High Court clarified an earlier order (which had resulted in the blocking of a number of websites), stating that it pertained only to specific URLs and not entire websites. Despite this, John Doe orders for the blocking of websites are commonplace.
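The distinction the Madras High Court drew, blocking a specific URL rather than an entire website, can be illustrated with a small, purely hypothetical sketch (the domains and URLs below are invented):

```python
# Hypothetical illustration of domain-level versus URL-level blocking.
from urllib.parse import urlparse

BLOCKED_SITES = {"example-piracy-site.com"}                          # whole-website block
BLOCKED_URLS = {"https://video-host.example/watch?v=pirated-film"}   # URL-specific block

def blocked_site_wide(url: str) -> bool:
    # Domain-level blocking: every page on the domain becomes unreachable,
    # including legitimate content hosted alongside the infringing page.
    return urlparse(url).netloc in BLOCKED_SITES

def blocked_url_specific(url: str) -> bool:
    # URL-level blocking: only the identified infringing page is affected.
    return url in BLOCKED_URLS

print(blocked_site_wide("https://example-piracy-site.com/lawful-blog-post"))    # True: collateral blocking
print(blocked_url_specific("https://video-host.example/watch?v=lawful-video"))  # False: lawful page untouched
```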

John Doe orders are passed ex parte due to the paucity of time and the difficulty of identifying defendants in such cases. However, these orders threaten to impair freedom of speech online due to a host of problems. First, these orders are granted on the basis of a mere ‘possibility’ of piracy, with no requirement to establish piracy before the court after blocking. This paves the way for negligence or misuse by copyright holders of the power to take down content. Second, these orders are usually passed on a minimal standard of evidence, on the word of the plaintiff, without sufficient scrutiny by the court of the URLs/websites submitted. Third, they do not require the copyright holders or ISPs to inform the persons whose content is taken down of the reasons for blocking, leaving almost no recourse to those whose website/URL may be blocked mistakenly. Fourth, the burden of carrying out these orders falls on the ISPs, who block the websites/URLs erring on the side of caution.

The absence of scrutiny and the lack of safeguards lead to over-blocking of content. Users who suffer as a result of these over-broad orders often lack the knowledge or means to overturn them, resulting in the loss of legal and legitimate speech online. Further, without any requirement for reaffirmation of the blocks by the court, private parties (the copyright holders) themselves become adjudicators of copyright violations, hampering the rights of users affected by these orders.

Instances of over-blocking as a result of these orders are many. In May 2012, as a result of an order by the Madras High Court, a range of websites was blocked, including legitimate content on video-sharing sites like Vimeo. In 2014, the Delhi High Court issued a John Doe order mandating the blocking of 472 websites, including Google Docs, in the wake of the FIFA World Cup. Many questioned such widespread blocking on the mere assumption that the websites would support pirated screening of the World Cup, especially without verification by the courts. The order was later narrowed.

The Bombay High Court Order

The jurisprudence regarding John Doe orders saw a shift when Justice Patel of the Bombay High Court took a significant step forward with his order dated July 26, 2016 for the movie Dishoom. The order recognises both the harms of piracy and the adverse impact of John Doe orders on unknown defendants as it attempts to balance the ‘competing rights’. It lays down a multi-tier process to minimise the negative impacts of John Doe orders and narrows blocking from entire websites (except in certain conditions) to specific URLs (see chart below).

[Chart: the multi-tier process laid down in the Bombay High Court order]

The order sets in place a mechanism that provides for selective blocking of content, verification of the list of URLs as well as safeguards for the unknown defendants.  Such a mechanism helps ensure that freedom of speech online is not trampled in the fight against online piracy.

The Delhi High Court Judgement

The Delhi High Court, in its judgement (not a John Doe order) dated July 29, 2016, blocked 73 websites in a case regarding the live streaming of pirated videos of cricket matches. While some lauded this judgement for its contribution to India’s fight against piracy, it is important to understand its many failings.

The High Court blocked the websites on a ‘prima facie’ view of the material placed before it by the plaintiffs that the websites were entirely, or to a large extent, carrying out piracy. However, it remains unclear what standard the court used to determine the extent of piracy. Further, the complete blocking of a website encroaches upon the other party’s right to carry on business and freedom of expression; therefore, the standard for placing such a restriction must be high and well defined. This was recognised by Justice Patel in the Bombay High Court order, where the court clarified that while there is no prohibition on blocking an entire website, there is a need for the ‘most comprehensive audit or review reasonably possible’ to establish that the website contains ‘only’ illicit content.

Even though the judgment of the Delhi High Court is against 73 named defendants, the order was passed merely on a prima facie review of the material laid before it, which reveals the lack of a system of verification. A mere prima facie review of material is insufficient, as a third party whose website is mistakenly blocked would suffer unnecessarily. With no direction to issue a notice of information to the defendants, they may not even be aware of the blocking. Thus, the Delhi High Court order suffers from various problems that pave the way for over-blocking of content. The judgement also places an unfair burden on the government and raises questions regarding the role of the intermediary, which have been articulated in detail in a post by Spicy IP here.

Questions for the Future

These developments raise a number of issues for the future, the most prominent being the need to reconcile the differing legal developments on the issue of online piracy across India. Further, there is a need for courts to be more sensitive to the plight of unknown third parties while passing John Doe orders, following the lead of the Bombay High Court.

Two particular issues come to light amongst this mess. First, the need to develop a standard for the blocking of complete websites that is sufficiently high to prevent misuse and over-blocking. Second, the need to develop a neutral body that can verify the lists for blocking on behalf of the courts, ensuring sufficient checks in the system while keeping in mind the paucity of time in these cases. However, any such body must possess the technical know-how to understand how these lists are put together and other related issues. The lag between technology and law is very real, as correctly pointed out by Justice Patel, and these small steps will go a long way in bridging these gaps.

(For some more reading on this issue you can look at a piece published in the Mint here and for a more detailed reading on the Bombay High Court order you can read the Spicy IP post here)

Internet Shutdowns: An Update

At the time of posting, mobile internet services continue to remain suspended in parts of Jammu & Kashmir for the sixth consecutive day. The shutdown was enforced in response to the tense law and order situation prevailing in the Kashmir valley following the death of Burhan Wani, a top commander in the terrorist outfit Hizbul Mujahideen.

This shutdown, already the fourteenth in India this year, comes on the heels of the adoption of a resolution by the UN Human Rights Council (UNHRC) on the “promotion, protection and enjoyment of human rights on the internet”. Although the resolution stops short of recognizing access to the internet as a human right, it affirms that human rights exercised offline should also be protected online. The UNHRC had previously resolved to protect human rights online in its 2012 and 2014 sessions, but this resolution marks a significant improvement as it specifically comments on the hitherto unaddressed issue of internet shutdowns. It condemns measures that “disrupt access to or dissemination of information online, in violation of international human rights law” and calls on states to refrain from such measures. The resolution is timely, coming at a juncture when governments worldwide are, with increasing frequency, adopting various strategies to shut down the internet. Governments have shut down the internet to counter problems ranging from civil unrest or uprisings, as in Zimbabwe most recently, to cheating in exams, as in Gujarat earlier this year.

Contrary to media reports that India had voted against the resolution, India voted in favour of three amendments to the resolution mooted by Russia and China. While commentators have been divided on whether these amendments are antithetical to the spirit of the resolution, India did vote for an amendment to weaken the emphasis on the “human rights-based approach” conceived of originally in the resolution. The cruel irony of this is amplified in the context of the questionable human rights record of the armed forces and the police in Jammu and Kashmir, which has experienced the highest number of shutdowns in the country.

In our previous posts, we had argued that the implementation of shutdowns under Section 144 of the Code of Criminal Procedure, 1973 suffered from fatal over-breadth and was constitutionally unviable. In practical terms, the hazards of implementing a widespread internet shutdown simply cannot be overstated. The suspension of the internet, especially in situations of riot or violence, becomes especially problematic for citizens. As reported in Jammu and Kashmir, the lack of reliable information through communication channels has contributed to the propagation of rumours and the worsening of the situation in many parts of the valley. The communication breakdown has adversely affected the provision of much-needed health and emergency services, in addition to disrupting trade and commerce significantly. The collateral damage of internet shutdowns becomes especially relevant when considered against the prism of the Government’s stated mission in endorsing programmes like Digital India and the Smart Cities Mission, which will rely substantially on the internet for smooth functioning and delivery of services. With the mechanics of everyday life becoming increasingly intertwined with the internet, it is essential to ask whether the internet should be shut down without procedural transparency.

As previously stated, India is not alone in implementing internet and communication network shutdowns of this nature. Not surprisingly, even in jurisdictions with a strong tradition of respect for free speech, executive procedures relating to shutting down the internet and other communication services at the government’s instance remain shrouded in secrecy. In the US, for instance, a policy known as Standard Operating Procedure 303 allows for the shutdown of cell-phone services anywhere in the country in the event of a crisis. As in India, on account of the lack of transparency and accountability, activists fear that the power may be abused. A petition that sought more information on the protocol was declined by the Supreme Court of the United States. In the UK too, a localized mobile network shutdown implemented by the City of London Police following the terrorist bombings in London came in for heavy criticism, having affected over a million individuals’ communications. A review committee found that the protocol needed to be reviewed and restructured to provide for adequate and effective procedures.

Additionally, the conversation on internet shutdowns is increasingly focused on the prospect of shutting down the internet in the event of a cyber attack. In the UK, for instance, specific legislation enables Government-ordered suspension of the internet in the event of a “web Armageddon”. In India, the debates and discourse around internet shutdowns are still nascent, but will only acquire increasing significance. The Government is in the process of considering amendments to the Information Technology Act, 2000 to ramp up cyber security provisions. As we progress toward systems that are completely digitized, the likelihood of cyber attacks will only increase, which raises the question of whether the Government can choose to shut down the internet and what procedures it is bound by in doing so.

The internet is a great enabler of democracy, having greatly lowered the hurdles to free speech and assembly. Any attempt at shutting down the internet must necessarily be accompanied by structured efforts to avoid the arbitrary exercise of such power. The imminent threat of an Emergency-like situation gagging the internet may seem alarmist at the moment, but there certainly needs to be an active and concerted effort to examine the legality and necessity of shutdowns while putting in place strict procedural standards.

Google de-platforms Taliban Android app: Speech and Competition implications?

Written by Siddharth Manohar

A few weeks ago, Google pulled an app developed by the Taliban from its online application marketplace, the Google Play Store. The app propagated violently extremist views and spread hateful content. Google has stated that it removed the app because it violated the Google Play Store policy.

Google maintains a comprehensive policy statement for any developer who wishes to upload an app for public consumption on the Play Store. The policy, apart from laying down rules for the Play Store as a marketplace, also places certain substantive conditions on developers using the platform to reach users.

Amongst other restrictions, one head reads ‘Hate Speech’. It says:

We don’t allow the promotion of hatred toward groups of people based on their race or ethnic origin, religion, disability, gender, age, veteran status, or sexual orientation/gender identity.

Google found the Taliban app to violate this stipulation in the Play Store policy, as confirmed by a Google spokesperson, who said that the policies are “designed to provide a great experience for users and developers. That’s why we remove apps from Google Play that violate those policies.” The app was first detected by an online intelligence group which claims to monitor extremist content on social media. It was developed to increase access to the Taliban’s online presence by presenting content in the Pashto language, which is widely spoken in the Afghan region.

The application itself is, of course, still available for download on a number of other websites; nonetheless, the content of its material led to its removal from a marketplace. This is an interesting application of the restriction of hateful speech, because the underlying principle in Google’s policy itself pays heed to the understanding that the development and sale of apps form a kind of free speech.

A potentially interesting debate in this area is the extent to which decisions on the contours of permissible speech can be made by a private entity on its public platform. The age-old debate about permissible restrictions on speech can find expression in this particular “marketplace of ideas”, the Google Play Store. On one hand, there is the concern of protecting users from harmful and hateful content: speech that targets and vilifies individuals based on some factor of their identity, be it race, gender, caste, colour, or sexual orientation. On the other hand, there will always be the concern that the monitoring of speech by the overseeing authority becomes excessive and censors certain kinds of opinions and perspectives from entering the mainstream.

This particular situation provides an easy example in the form of an application developed by an expressly terrorist organisation. It would however still be useful to keep an eye out in the future for the kind of applications that are brought under the ambit of such policies, and the principles justifying these policies.

The question of what, if any, control can be exercised over this kind of editorial power of Google over its marketplace is also a relevant one. Google can no doubt justify its editorial powers in relatively simple terms: it has explicit ownership of the entire platform and can decide the basis on which to allow developers onto it. However, the Play Store accounts for an overwhelmingly large proportion of how users access any application on a daily basis. Therefore, Google’s policies on the Play Store have a significant impact on how, and whether, applications are accessed by users in the context of the entire marketplace of applications and users. The policy implication is that the principles behind Google’s Play Store policies need to be scrutinised for how they impact the entire app development ecosystem. This is evidenced by the fact that the European Commission pulled up Google about a year ago over competition concerns regarding its Android operating system, and has also recently communicated its list of objections to Google. The variety of speech and competition concerns applicable to this context makes it one to watch closely for further developments and analysis.


Image Source: ‘mammela’, Pixabay.

Free Speech & Violent Extremism: Special Rapporteur on Terrorism Weighs in

Written by Nakul Nayak

Yesterday, the Human Rights Council came out with an advance unedited version of a report (A/HRC/31/65) of the Special Rapporteur on protection of human rights while countering terrorism. This report in particular deals with protecting human rights while preventing and countering violent extremism. The Special Rapporteur, Ben Emmerson, has made some interesting remarks on extremist speech and its position in the hierarchy of protected and unprotected speech.

First, it should be noted that the Report tries to grapple with and distinguish between the commonly conflated terms “extremism” and “terrorism”. Noting that violent extremism lacks a consistent definition across countries, and in some instances any definition at all, the Report goes on to liken it to terrorism. The Special Rapporteur also acknowledges the lack of understanding of the “radicalization process” whereby innocent individuals become violent extremists. While the Report does not suggest an approach to defining either term, it briefly contrasts the definitions laid down in various countries. However, there does seem to be some consensus that the ambit of violent extremism is broader than terrorism and consists of a range of subversive activities.

The important section of the Report, from the perspective of free speech, deals with incitement to violent extremism and efforts to counter it. The Report cites UN Resolution 1624 (2005), which calls for the need to adopt legislative measures as effective means of addressing incitement to terrorism. However, the Report insists on the existence of “serious human rights concerns linked to the criminalization of incitement, in particular around freedom of expression and the right to privacy”.[1] The Report then goes on to quote the UN Secretary General and the Special Rapporteur on Free Expression laying down various safeguards for laws criminalizing incitement. In particular, these laws must prosecute only incitement that is directly related to terrorism and has the intention and effect of promoting terrorism, and must include judicial recourse, among other things.[2]

This gives us an opportunity to discuss the standards for free speech restrictions in India. While the Supreme Court expressly imported the American speech-protective standard of incitement to imminent lawless action in Arup Bhuyan, confusion still persists over the applicable standard for justifying any restriction on free speech. The Supreme Court’s outdated ‘tendency’ test, which does not require an intimate connection between speech and action, still finds a place in today’s law reports. This is evident from the celebrated case of Shreya Singhal. After a lengthy analysis of the public order jurisprudence in India and advocating a direct connection between speech and public disorder, Justice Nariman muddies the waters by examining section 66A of the IT Act under the ‘tendency’ test. Some coherence in incitement standards is needed.

The next pertinent segment of the Report deals specifically with the impact of State measures on the restriction of expression, especially online content. Interestingly, the Report suggests that “Governments should counter ideas they disagree with, but should not seek to prevent non-violent ideas and opinions from being discussed”.[3] This brings to mind the recent proposal of the National Security Council Secretariat (NSCS) seeking to set up a National Media Analytics Centre (NMAC) to counter negative online narratives through press releases, briefings, and conferences. While nothing concrete has emerged, with the proposal still in the pipeline, safeguards must be implemented to assuage chilling-effect and privacy concerns. It may be noted here that the Report’s remarks are limited to countering speech that forms an indispensable part of the “radicalization process”. The NMAC, however, covers negative content across the online spectrum, with its only marker being the “intensity or standing of the post”.

An important paragraph of the Report, perhaps the gist of the free speech perspective on combating violent extremism, is its visible unease in determining the position of extremist speech glorifying and advocating terrorism. The Report notes the Human Rights Committee’s stand that terms such as “glorifying” terrorism must be clearly defined to avoid unnecessary incursions on free speech. At the same time, the Secretary General has deprecated the “troubling trend” of criminalizing the glorification of terrorism, considering it to be an inappropriate restriction on expression.[4]

These propositions are in stark contrast to India’s terror legislation, the Unlawful Activities Prevention Act, 1967. Section 13 punishes anyone who “advocates, … advises … the commission of any unlawful activity …” An unlawful activity has been defined in section 2(o) to include speech acts that

  • supports a claim of “secession of a part of the territory of India from the Union”, or
  • which “disclaims, questions … the sovereignty and territorial integrity of India”, or
  • rather draconically, “which causes … disaffection against India”.

It will also be noted that all three offences are content-based restrictions on free speech i.e. limitations based purely on the subjects that the words deal in. Textually, these laws do not necessarily require an examination of the intent of the speaker, the impact of the words on the audience, or indeed the context in which the words are used.

Finally, the Report notes the views of the Special Rapporteur on Free Expression on hate speech, characterizing most efforts to counter it as “misguided”. However, the Report also “recognizes the importance of not letting hate speech go unchecked …” In one sense, the Special Rapporteur expressly rejects American First Amendment jurisprudence, which does not acknowledge hate speech as a permissible ground for restricting free speech. At the same time, the Report’s insistence that “the underlying causes should also be addressed”, instead of being satisfied with mere prosecutions, is a policy aspiration that needs serious thought in India.

This Report on violent extremism (as distinct from terrorism) is much-needed and timely. The strong human rights concerns it espouses, with their attendant importance attached to a context-driven approach in prosecuting speech acts, are a sobering reminder of the many inadequacies of Indian terror law and its respect for fundamental rights.

Nakul Nayak was a Fellow at the Centre for Communication Governance from 2015-16.

[1] Para 24.

[2] Para 24.

[3] Para 38.

[4] Para 39.