How (not) to get away with murder: Reviewing Facebook’s live streaming guidelines

Introduction

The recent shooting in Cleveland, live streamed on Facebook, has brought the social media company’s regulatory responsibilities into question. Since the launch of Facebook Live in 2016, the service’s role in raising political awareness has been acknowledged. However, the service has also been used to broadcast several instances of graphic violence.

The streaming of violent content (including instances of suicide, murder and gang rape) has raised serious questions about Facebook’s responsibility as an intermediary. While it is not technically feasible for Facebook to review all live videos as they are being streamed, or to filter them before they are streamed, the platform does have a routine procedure in place to take down such content. This post will examine the guidelines for taking down live streamed content and discuss alternatives to the existing reporting mechanism.

What guidelines are in place?

Facebook has ‘community standards’ in place; however, its internal review methods are not known to the public. Live videos have to comply with these standards, which specify that Facebook will remove content relating to ‘direct threats’, ‘self-injury’, ‘dangerous organizations’, ‘bullying and harassment’, ‘attacks on public figures’, ‘criminal activity’ and ‘sexual violence and exploitation’.

The company has stated that it ‘only takes one report for something to be reviewed’. This system of review has been criticized because graphic content can go unnoticed unless someone reports it; there is no mandate of ‘compulsory reporting’ for viewers. Indeed, the Cleveland shooting video was not detected by Facebook until it was flagged as ‘offensive’, a couple of hours after the incident. The company has also stated that it is working on developing ‘artificial intelligence’ that could help put an end to such broadcasts. For now, however, it relies on the reporting mechanism, under which ‘thousands of people around the world’ review posts that have been reported. The reviewers check whether the content violates the ‘community standards’ and ‘prioritize videos with serious safety implications’.
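
To make the reporting workflow concrete, the sketch below models a report-driven review queue of the kind described above, assuming that a single report is enough to queue a video for human review and that reports flagged under safety-related categories jump ahead of the rest. The category labels, class names and priority rules are illustrative assumptions, not Facebook’s actual implementation.

```python
import heapq
import itertools

# Assumed labels for categories with "serious safety implications".
SAFETY_CATEGORIES = {"self-injury", "direct threat", "criminal activity"}

class ReviewQueue:
    """Minimal sketch: one report is enough to enqueue a video for human review."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserves report order

    def report(self, video_id: str, category: str) -> None:
        # Lower priority number = reviewed sooner; safety reports jump the queue.
        priority = 0 if category in SAFETY_CATEGORIES else 1
        heapq.heappush(self._heap, (priority, next(self._counter), video_id, category))

    def next_for_review(self):
        # Human reviewers pull the highest-priority reported video.
        if self._heap:
            _, _, video_id, category = heapq.heappop(self._heap)
            return video_id, category
        return None

queue = ReviewQueue()
queue.report("live_123", "nudity")
queue.report("live_456", "direct threat")
print(queue.next_for_review())  # ('live_456', 'direct threat') is reviewed first
```

Even with prioritisation, a queue of this kind only sees what viewers choose to report, which is precisely the gap identified above.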

While deciding whether a video should be taken down, the reviewers also take the ‘context and degree’ of the content into consideration. For instance, content that is aimed at ‘raising awareness’, even if it displays violence, will be allowed, whereas content celebrating such violence will be taken down. For example, when a live video of Philando Castile, a civilian, being shot by a police officer in Minnesota went viral, Facebook kept the video up on its platform, stating that it did not glorify the violent act.

 Regulation

Beyond the internal guidelines by which Facebook regulates itself, government regulators such as the United States’ Federal Communications Commission (FCC) have not intervened. Unlike television, where the FCC regulates content and can deem material ‘inappropriate’, social media websites are largely shielded from content regulation.

This brings up the question of intermediary liability and Facebook’s liability for hosting graphic content. Under American law, there is a distinction between ‘publishers’ and ‘common carriers’. A common carrier only ‘enables communications’ and does not ‘publish content’; if a platform edits content, it is most likely a publisher. A ‘publisher’ has a higher level of responsibility for content hosted on its platform than a ‘carrier’ does. In most instances, social media companies are covered by Section 230 of the Communications Decency Act, a safe harbor provision under which they are not held liable for third-party content. However, questions have been raised about whether Facebook is a ‘publisher’ or a ‘common carrier’, and there seems to be no conclusive answer.

Conclusion

Several experts have considered possible solutions to this growing problem. Some believe that such features should be limited to certain partners and opened up to the public only once additional safeguards and better artificial intelligence technologies are in place. In these precarious situations, enforcing stricter laws on intermediaries might not resolve the issue at hand. Some jurisdictions have ‘mandatory reporting’ provisions, specifically for crimes of sexual assault. In India, under Section 19 of the Protection of Children from Sexual Offences Act, 2012, ‘any person who has apprehension that an offence…is likely to be committed or has knowledge that such an offence has been committed’ has to report it. In the context of cyber-crimes, a system of ‘mandatory reporting’ would shift the onus onto viewers and supplement the existing reporting system. Mandatory provisions of this nature do not exist in the United States, where most of the larger social media companies are based.

Accordingly, possible solutions should focus on strengthening the existing reporting system, rather than holding social media platforms liable.

Reviewing the Law Commission’s latest hate speech recommendations

Introduction

The Law Commission has recently released a report on hate speech laws in India. The Supreme Court, in Pravasi Bhalai vs. Union of India, asked the Law Commission to recommend changes to existing hate speech laws and to “define the term hate speech”. The report discusses the history of hate speech jurisprudence in India and in certain other jurisdictions. In addition, it stresses the difficulty of defining hate speech and the lack of a concise definition; in the absence of such a definition, it sets out certain ‘identifying criteria’ to detect instances of hate speech. It also discusses the theories of Jeremy Waldron (the ‘dignity’ principle) and makes a case for protecting the interests of minority communities by regulating speech. In this regard, two new sections for the Indian Penal Code (IPC) have been proposed. They are as follows:

(i) Prohibiting incitement to hatred-

“153 C. Whoever on grounds of religion, race, caste or community, sex, gender identity, sexual orientation, place of birth, residence, language, disability or tribe –

(a)  uses gravely threatening words either spoken or written, signs, visible representations within the hearing or sight of a person with the intention to cause, fear or alarm; or

(b)  advocates hatred by words either spoken or written, signs, visible representations, that causes incitement to violence shall be punishable with imprisonment of either description for a term which may extend to two years, and fine up to Rs 5000, or with both.”.

(ii) Causing fear, alarm, or provocation of violence in certain cases.

“505 A. Whoever in public intentionally on grounds of religion, race, caste or community, sex, gender, sexual orientation, place of birth, residence, language, disability or tribe-

uses words, or displays any writing, sign, or other visible representation which is gravely threatening, or derogatory;

(i) within the hearing or sight of a person, causing fear or alarm, or;

(ii) with the intent to provoke the use of unlawful violence,

against that person or another, shall be punished with imprisonment for a term which may extend to one year and/or fine up to Rs 5000, or both”.

The author is of the opinion that these recommended amendments are vague and broadly worded and could lead to a chilling effect and over-censorship. Here are a few reasons why the recommendations might not be compatible with free speech jurisprudence:

1. Three-part test

Article 10 of the European Convention on Human Rights lays down three requirements that need to be fulfilled for a restriction on free speech to be warranted. The Law Commission report also discusses this test; it includes the requirement that a measure be ‘prescribed by law’, the need for a ‘legitimate aim’ and the test of ‘necessity and proportionality’.

Under the ‘prescribed by law’ standard, a restriction on free speech must be ‘clear and not ambiguous’. For instance, a phrase like ‘fear or alarm’ (already found in Section 153A and Section 505) has been criticized for being ‘vague’. Without defining or restricting this term, the public would not know what constitutes ‘fear or alarm’ and therefore how to comply with the law. This standard was also reiterated in Shreya Singhal vs. Union of India, where it was held that the ambiguously worded Section 66A could be problematic for innocent people since they would not know on "which side of the line they fall".

2. Expanding scope to online offences?

The newly proposed sections also mention that any ‘gravely threatening words within the hearing or sight of a person’ would be penalized. Presumably, the phrase ‘within the sight or hearing of a person’ broadens the scope of the provision and could bring online speech within the ambit of the IPC. The phrase is similar to the wording of Section 5(1) of the Public Order Act, 1986[1] in the United Kingdom, which penalizes “harassment, alarm or distress”. Even though that section does not explicitly mention offences on the internet, it has been presumed to cover them.[2]

Similarly, if the intent of the framers of Section 153C is to expand its scope to cover online offences, it might introduce the same issues as the now struck-down Section 66A of the IT Act. Section 66A penalized the transmission of information that was ‘menacing’ or that promoted ‘hatred or ill will’. The over-breadth of these terms, along with the lowering of the ‘incitement’ threshold (discussed below), led to the section being struck down. Even though the proposed Section 153C does not provide for as many grounds (hatred, ill will, annoyance, etc.), it does explicitly lower the threshold from ‘incitement’ to ‘fear or alarm’/‘discrimination’.

3. The standard of ‘hate speech’

The report also advocates penalizing the ‘fear or alarm’ caused by such speech, since it could have the effect of ‘marginalizing a section of the society’. As mentioned above, the report explicitly states that the threshold of ‘incitement to violence’ should be lowered and that factors like ‘incitement to discrimination’ should also be considered.

The Shreya Singhal judgment drew a distinction between ‘discussion, advocacy and incitement’, stating that a restriction on the freedom guaranteed by Article 19(1)(a) of the Constitution would be justifiable only where speech amounts to ‘incitement’, not mere ‘discussion’ or ‘advocacy’. This distinction was drawn so that discussing or advocating ideas which could lead to problems with ‘public order’ or disturb the ‘security of the state’ could be differentiated from ‘incitement’, which requires a closer ‘causal connection’ to such harm.

Similarly, if merely contributing to ‘fear or alarm’ is penalized, the threshold of ‘incitement’ would be lowered and constitutionally protected speech could be censored.

Conclusion

Despite the shortcomings mentioned above, the report is positive in a few respects. It draws attention to important contemporary issues affecting minority communities and to how speech is often used to mobilize communities against each other. It also relies on Jeremy Waldron’s ‘dignity principle’ to make a case for differential hate speech standards that protect minority communities. In addition, the proposed grounds for discrimination now include ‘tribe’ and ‘sexual orientation’, amongst others.

However, existing case law, coupled with recent instances of censorship, makes the insertion of these provisions troubling. India’s relationship with free speech is already dire: the Press Freedom Index ranks the country at 133 (out of 180) and the Freedom on the Net report rates India as only ‘partly free’. The Law Commission might need to reconsider the recommendations for the sake of upholding free speech. Pravasi Bhalai called for sanctioning politicians’ speeches, but the recommendations made by the Law Commission reach much further, and their effects could be chilling.

 

[1] Section 5 – Harassment, alarm or distress.
(1) A person is guilty of an offence if he—
(a) uses threatening or abusive words or behaviour, or disorderly behaviour, or
(b) displays any writing, sign or other visible representation which is threatening or abusive,
within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby.

[2] David Wall, Cybercrime: The Transformation of Crime in the Information Age, Page 123, Polity.

Roundup of Sabu Mathew George vs. Union of India: Intermediary liability and the ‘doctrine of auto-block’

Introduction

In 2008, Sabu Mathew George, an activist, filed a writ petition to ban ‘advertisements’ relating to pre-natal sex determination from search engines in India. According to the petitioner, the display of these results violated Section 22 of the Pre-Natal Diagnostic Techniques (Regulation and Prevention of Misuse) Act, 1994 (PNDT Act). Between 2014 and 2015, the Supreme Court repeatedly ordered the respondents to block these advertisements. Finally, on November 16, 2016, the Supreme Court ordered the respondents, Google, Microsoft and Yahoo, to ‘auto-block’ advertisements relating to pre-natal sex determination. It also ordered the creation of a ‘nodal agency’ that would provide search engines with the details of websites to block. The next hearing in this case is scheduled for February 16, 2017.

The judgment has been criticised for over-breadth and the censorship of legitimate content. We discuss some issues with the judgment below.

Are search engines ‘conduits’ or ‘content-providers’?

An earlier order in this case, dated December 4, 2012, states that the respondents argued that they “provided a corridor and did not have any control” over the information hosted on other websites.

There is often confusion surrounding the characterization of search engines as either ‘conduits’ or ‘content-providers’. A conduit is a ‘corridor’ for information, otherwise known as an intermediary; a content-provider produces or alters the displayed content. Authors like Frank Pasquale have suggested that search engines (Google specifically) take advantage of this grey area by portraying themselves as either conduits or content-providers in order to avoid liability. For instance, Google is likely to portray itself as a content-provider when it needs to claim First Amendment protection in the United States, and as a conduit for information when it needs to defend itself against First Amendment attacks. When concerns related to privacy arise, search engines claim editorial rights and freedom of expression; conversely, when intellectual property or defamation claims arise, they portray themselves as ‘passive conduits’.

In the Indian context, there has been similar dissonance about the characterization of search engines, and in the aftermath of the Sabu Mathew George judgment their nature was debated by several commentators. One commentator pointed out that the judgment would contradict the Supreme Court’s decision in Shreya Singhal vs. Union of India reading down Section 79(3)(b) of the Information Technology Act, 2000 (IT Act), which restricted the liability of intermediaries. On this view, search engines are passive conduits/intermediaries, and the Sabu Mathew George judgment would effectively hold them liable for content hosted unbeknownst to them. Another commentator criticised this argument, stating that if Google willingly publishes advertisements through its AdWords system, it is a publisher and not merely an intermediary; this portrays Google as a content-provider.

Sabu Mathew George defies existing legal standards 

As mentioned above, the Sabu Mathew George judgment contradicts the Supreme Court’s decision in Shreya Singhal, where Section 79(3)(b) of the IT Act was read down and the liability of intermediaries restricted. The Court in Shreya Singhal held that intermediaries could only be compelled to take down content through court orders or government notifications. In the present case, however, the Supreme Court has repeatedly ordered the respondents to devise ways to monitor and censor their own content and even to resort to ‘auto-blocking’ results.

The order dated November 16, 2016 also contradicts the Blocking Rules framed under the Information Technology Act, 2000. In the order, the Supreme Court directed the Centre to create a ‘nodal agency’ which would allow people to register complaints against websites violating Section 22 of the PNDT Act. These complaints would then be passed on to the concerned search engine in the manner described below:

“Once it is brought to the notice of the Nodal Agency, it shall intimate the concerned search engine or the corridor provider immediately and after receipt of the same, the search engines are obliged to delete it within thirty-six hours and intimate the Nodal Agency.”

The functioning of this nodal agency would circumvent the Blocking Rules under the IT Act. Under the Blocking Rules, the Committee for Examination of Requests reviews each blocking request and verifies whether it is in line with Section 69A of the IT Act. The Sabu Mathew George order does not prescribe a similar review system. While the author acknowledges that the nodal agency’s blocking powers do not flow from the statute, its actions could still lead to over-blocking.
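
As a rough illustration of the complaint flow contemplated in the order, the sketch below assumes a nodal agency that records a complaint, notifies the search engine, and tracks the thirty-six-hour window within which the search engine must act and report back. The class and field names are hypothetical and are not drawn from the order or the Blocking Rules.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

DEADLINE = timedelta(hours=36)  # the thirty-six-hour window mentioned in the order

@dataclass
class Complaint:
    """Hypothetical record of one complaint routed through the nodal agency."""
    url: str
    received_at: datetime
    forwarded_at: Optional[datetime] = None
    resolved_at: Optional[datetime] = None

    def forward_to_search_engine(self, now: datetime) -> None:
        self.forwarded_at = now  # nodal agency "intimates" the search engine

    def mark_deleted(self, now: datetime) -> None:
        self.resolved_at = now   # search engine deletes the content and reports back

    def overdue(self, now: datetime) -> bool:
        # The clock runs from the moment the search engine is notified.
        return (
            self.forwarded_at is not None
            and self.resolved_at is None
            and now - self.forwarded_at > DEADLINE
        )

c = Complaint(url="http://example.com/ad", received_at=datetime(2016, 11, 17, 9, 0))
c.forward_to_search_engine(datetime(2016, 11, 17, 10, 0))
print(c.overdue(datetime(2016, 11, 19, 0, 0)))  # True: more than 36 hours have passed
```

Notably, nothing in this flow corresponds to the independent review that the Blocking Rules provide, which is the gap identified above.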

‘Organic search results’ and ‘sponsored links’

One important distinction in this case is between ‘organic search results’ and ‘sponsored links’. A submission by MeitY (then DeitY) explaining the difference between the two was not addressed by the Supreme Court in the order dated December 4, 2014.

Section 22 of the PNDT Act criminalizes the display of ‘advertisements’ but does not define the term precisely. The respondents argued that ‘advertisement’ relates to ‘sponsored links’ and not to ‘organic search results’. As per the order dated September 19, 2016, Google and Microsoft agreed to remove ‘advertisements’ and stated that search results should not fall under Section 22 since they are not ‘commercial communication’. However, on November 16, 2016, the Supreme Court stated that the block would extend to both ‘sponsored links’ and ‘organic search results’. The respondents expressed concern about this, stating that legitimate information on pre-natal sex determination would become unavailable and that the ‘freedom of access to information’ would be restricted. The Court stated that this freedom could be curbed for the sake of the larger good.

The ‘doctrine of auto-block’

In the order dated September 19, 2016, the Court discussed the ‘doctrine of auto-block’ and the responsibility of the respondents to block illegal content themselves. The Court listed roughly 40 search terms and stated that the respondents should ensure that any attempt at looking up these terms would be ‘auto-blocked’. The respondents also agreed to disable the ‘auto-complete’ feature for these terms.
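
The sketch below illustrates the kind of keyword ‘auto-block’ the order appears to contemplate: a fixed list of proscribed terms checked against queries and autocomplete suggestions. The terms and the substring-matching rule are invented for illustration and are not the roughly 40 terms listed by the Court; the example also shows why such matching is over-broad, since a query about the legality of the practice is blocked as readily as an advertisement for it.

```python
# Illustrative blocklist only; not the Court's list of terms.
BLOCKED_TERMS = {"gender selection", "prenatal sex determination"}

def is_blocked(query: str) -> bool:
    # Naive substring match against every proscribed term.
    q = query.lower()
    return any(term in q for term in BLOCKED_TERMS)

def filter_autocomplete(suggestions: list[str]) -> list[str]:
    # "Disabling autocomplete" for blocked terms simply drops matching suggestions.
    return [s for s in suggestions if not is_blocked(s)]

# A query about the legality of the practice is blocked just as readily as an
# advertisement for it, which is the over-breadth critics point to.
print(is_blocked("is prenatal sex determination illegal in india"))  # True
print(filter_autocomplete(["prenatal vitamins", "prenatal sex determination ban"]))
# ['prenatal vitamins']
```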

Google has blocked search terms from its auto-complete system in several other countries, often with little success. This article points out that illegal search terms relating to child pornography have been allowed on auto-complete while more innocuous terms like ‘homosexual’ have been blocked by Bing, showing that this system of blocking has several discrepancies.

Beyond a chilling effect on free speech, disabling auto-complete can have other adverse effects. In one instance, the owner of a sex-toy store complained that her business was not benefitting from the autocomplete feature as several others had. She stated that “…Google is … making it easier for people to find really specific information related to a search term. In a sense it’s like we’re not getting the same kind of courtesy of that functionality.” Similarly, legitimate websites discussing pre-natal sex determination might lose potential readers or viewers if ‘auto-complete’ is disabled.

Conclusion

The author would like to make two broad suggestions. First, the functioning of the nodal agency should be revisited: the recommended system lacks accountability and transparency and is likely to lead to over-blocking and a chilling effect.

Second, search engines should not be given over-arching powers to censor their own websites; it is well established that this leads to over-censorship. In addition to contradicting the reading of Section 79(3)(b) of the IT Act in Shreya Singhal, the Court would also be delegating judicial authority to a private search engine.

According to a study conducted by the Centre for Internet & Society, Bangalore in January 2015, searching for keywords relating to pre-natal sex determination on Google, Yahoo and Bing did not yield a large number of ‘organic search results’ or ‘sponsored links’ that would violate Section 22 of the PNDT Act. From 2015 to 2016, search engines have presumably followed the Supreme Court’s orders and filtered out illegal search results and advertisements. Since instances of illegal search results and advertisements were not rampant to begin with, there seems to be no urgent need to impose strict measures like ‘auto-blocks’.

The Supreme Court appears to be imposing similarly arbitrary rules on search engines in other cases as well. Recently, the Court ordered Google, Microsoft and Yahoo to create a ‘firewall’ that would prevent illegal videos from being uploaded to the internet, citing the example of China’s firewall to demonstrate the feasibility of the order.

Speaking Out Against Online Extremism: Counter-Speech and its Effectiveness

Introduction

This post is part of a series on online extremism, in which we discuss the regulatory and legal issues surrounding this growing problem. The current post focuses on counter-speech, one of the regulatory techniques.

What is Counter Speech?

Counter-speech or counter-narratives in the context of extremism have been defined as “messages that offer a positive alternative to extremist propaganda, or alternatively aim to deconstruct or delegitimise extremist narratives”.

This definition has been broken down into three categories to explain the different approaches:

a) Counter speech that is intended to negate extremist speech.

b) Counter speech focussed on positive narratives.

Later in the post, we will discuss an initiative which addresses issues of cultural identity faced by young Muslims. This narrative does not necessarily focus on dispelling biases, but rather on initiating discussions around related issues.

c) Informative counter-speech. This narrative focuses on debunking extremist propaganda. Unlike the first category, it intends to negate misinterpretations perpetuated by extremists, and is usually related to organizations or individuals in the public eye.

For the purposes of this post, counter-speech is limited to counter-narratives on online platforms. It is, however, not limited to text messages or videos and can extend to various other mediums, like the FBI’s interactive game ‘Don’t Be a Puppet’.

Why Counter-Speech?

In May 2016, the United Nations Security Council discussed the need for an international framework to combat online extremism. During the meeting, the dangers of extremists exploiting social media platforms and the possible remedies were discussed. The discussion stressed the need to ‘safeguard the freedom of the press’ by not resorting to excessive censorship. The forthcoming international framework could benefit from utilizing counter-speech as a viable alternative to censorship.

Using counter-speech or counter-narratives to fight online extremism might avoid the criticism faced by other anti-extremist measures. As discussed in our previous post, internal regulation and state-controlled regulation both run the risk of ‘over-censorship’.

A counter-speech strategy would not rely on ‘taking down’ content. Taking down or blocking access to content offers only momentary relief, since the same content can crop up elsewhere. In some instances, when extremist accounts on Twitter and WhatsApp were taken down, new accounts emerged shortly afterwards or the propaganda moved to encrypted platforms.

The UN Special Rapporteur on Freedom of Expression stated that “repressive approaches would have the reverse effect of reinforcing the narrative of extremist ideologies”.

In addition, counter-speech would address the root cause of online extremism: indoctrination. The UN Special Rapporteur also stated that ‘blocking websites’ would not be the right approach and that "strategies addressing the root causes of such viewpoints" should be pursued.

A platform which allows open discussions or debates about beliefs might lead to a more effective anti-extremism regime.

Organizations utilizing counter speech

The United States government has initiated a few counter-speech programmes. The Bureau of Counterterrorism and Countering Violent Extremism has introduced initiatives like the ‘Think Again Turn Away’ campaign, which focuses on spreading counter-narratives on YouTube, Twitter and other such platforms. The Federal Bureau of Investigation (FBI) has launched an interactive game to sensitize people to the dangers of extremism: ‘Don’t Be a Puppet’ aims to educate young people on questions like ‘What are known violent extremist groups?’ and ‘How do violent extremists make contact?’.

There are also several counter-speech initiatives operated by private bodies. A few, namely ExitUSA and Average Mohamed, have been studied by the Institute for Strategic Dialogue (ISD). ExitUSA produces videos aimed at ‘white supremacists’; its approach is informative and intends to negate popular extremist propaganda. Average Mohamed is an initiative for young Somalis in the United States. Among the videos it produces, a few, titled ‘Identity in Islam’ and ‘A Muslim in the West’, address cultural issues faced by young Muslims. Through animated videos featuring the protagonist ‘Average Mohamed’, a young boy in the United States, the initiative fosters positive counter-speech among viewers.

Speech Gone Wrong – Shortcomings of Counter-Speech

The previously mentioned ‘Don’t Be a Puppet’ initiative has itself been criticized for employing bigoted, anti-Islamic narratives.

In addition to claims of bigotry, a few of the government-led initiatives have been criticized for being opaque. Earlier this year, the White House organized a summit on Countering Violent Extremism (CVE), during which multi-million dollar plans were initiated. Following the summit, a Senate sub-committee was instituted and a sizeable proportion of the 2017 fiscal budget was allocated to CVE. However, lawsuits have been filed under the Freedom of Information Act demanding details about these initiatives.

More importantly, the impact or success of counter-speech has not been substantiated. In the ISD study, for instance, the researchers state that determining the success or outcome of counter-speech initiatives is “extremely difficult”. Faced with these limitations, their methodology is based on the ‘sustained engagement’ they had with users, measured by the comments, tweets and messages exchanged between the counter-speech organization and the user.

Lastly, referring back to our previous post, some private organizations have also removed content under the guise of counter-speech. Facebook, in collaboration with the Online Civil Courage Initiative (OCCI), vowed to employ counter-speech online, stating that it was more effective than censorship. However, as evidenced by OCCI’s manual, the organization was allowed to take down ‘antagonistic’ content, leading to censorship.

Future of Counter Speech

While counter-speech suffers from fewer setbacks than other regulatory techniques, it needs more transparency to function better. As of now, there are no universally applicable guidelines for counter-speech. Guidelines and rules could help establish transparency and prevent instances of censorship or bigotry.

Online Extremism and Hate Speech – A Review of Alternate Regulatory Methods

Introduction

Online extremism and hate speech on the internet are growing global concerns. In 2015, the EU signed a code of conduct with social media companies including Facebook, Google and Twitter to effectively regulate hate speech on the internet. The code, amongst other measures, discussed stricter sanctions on intermediaries (social media companies) in the form of a ‘notice and takedown’ regime, a practice which has been criticised for creating a ‘chilling’ effect and leading to over-censorship.

While this system remains in place, social media companies are attempting to adopt alternative regulatory methods. If companies routinely monitored their platforms for illegal content before government notices were issued, they could save time and money. This post offers some insight into the alternative modes of regulation used by social media companies.

 YouTube Heroes – Content Regulation by Users

YouTube Heroes was launched in September 2016 with the aim of regulating content more efficiently. Under this initiative, YouTube users are allowed to ‘mass-flag’ content that goes against the Community Guidelines, which specifically prohibit hate speech. As per the Guidelines, content that “promotes violence or hatred against individuals” based on certain attributes amounts to hate speech; these ‘attributes’ include, but are not limited to, race, gender and religion.

‘Mass-flagging’ is just one of the many tools available to a YouTube Hero. The system is based on points and ranks, with users earning points for helping translate videos and for flagging inappropriate content. As they climb the ranks, users gain exclusive perks, like the ability to contact YouTube staff directly. ‘Mass-flagging’ is in essence the same as flagging a video, an option that YouTube already offered. However, the incentive of gaining access to private moderator forums and YouTube staff could lead users to flag videos for extraneous reasons. While ‘mass-flagged’ videos are reviewed by YouTube moderators before being taken down, the initiative has still raised concerns.
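
As a rough sketch of how such a points-and-ranks scheme creates the incentives discussed below, consider the following, in which users earn points for actions like translating captions or flagging videos, and higher point totals unlock extra privileges. The point values, rank thresholds and perk names are invented for illustration and are not YouTube’s actual scheme.

```python
# Invented point values and rank thresholds; illustrative only.
POINTS = {"translate_video": 2, "flag_video": 1}
RANK_THRESHOLDS = [(500, "access to staff"), (100, "moderator forum"), (0, "basic")]

class Hero:
    """Hypothetical participant in a points-and-ranks flagging programme."""

    def __init__(self, name: str):
        self.name = name
        self.points = 0

    def act(self, action: str) -> None:
        self.points += POINTS.get(action, 0)

    def rank(self) -> str:
        for threshold, perk in RANK_THRESHOLDS:
            if self.points >= threshold:
                return perk
        return "basic"

# The incentive problem critics raise: flagging is cheap and repeatable, so a
# user chasing perks is rewarded for the volume of flags, not their accuracy.
user = Hero("example_user")
for _ in range(120):
    user.act("flag_video")
print(user.points, user.rank())  # 120 'moderator forum'
```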

The initiative has been criticised for giving free rein to users, who may flag content because of personal biases, leading to ‘harassment campaigns’. Popular YouTube users have panned YouTube Heroes, fearing that their videos could be targeted by ‘mobs’. Despite the review system in place, users have also expressed doubts about YouTube’s ability to accurately take down flagged content. Since the initiative is still in its testing stage, it is difficult to determine what its outcome will be.

Facebook’s Online Civil Courage Initiative – Counter Speech

Governmental authorities across the world have been attempting to curb hate speech and online extremism in myriad ways. For instance, in November 2015, an investigation was launched involving one of Facebook’s European Managing Directors, who was accused of letting Facebook host hate speech. The investigation ended without Facebook representatives being implicated, but it marked an increase in international pressure on the company to deal effectively with hate speech.

Due to growing pressure from governmental authorities, Facebook began to ‘outsource’ content removal. In January 2016, a German company called Arvato was delegated the task of reviewing and taking down reported content, alongside Facebook’s Community Operations Team. There is limited public information on the terms of service or rules Arvato is bound by, and in the absence of such information, ‘outsourcing’ could contribute to a private censorship regime. With no public guidelines in place, the outsourcing process is neither transparent nor accountable.

Additionally, Facebook has been working with other private bodies to regulate content online. Early in 2016, Facebook, in partnership with several NGOs, launched the Online Civil Courage Initiative (OCCI) to combat online extremism with counter-speech. Facebook COO Sheryl Sandberg said that ‘censorship’ would not put an end to hate speech and that counter-speech would be a far more effective mode of regulation. Under this initiative, civil society organizations and NGOs are ‘rewarded’ with ad credits, marketing resources and strategic support for engaging in counter-speech online.

It is pertinent to note that the Information Pack on Counter Speech Engagement is the only set of guidelines made public by the OCCI. These guidelines provide information on planning a counter-speech campaign. An interesting aspect of the information pack is the section on ‘Responding and Engaging during a campaign’, under which comments are categorised as ‘supportive, negative, constructive, antagonistic’ and a table suggests how each category should be ‘engaged with’. Surprisingly, ‘antagonistic’ comments are to be ‘ignored, hidden or deleted’. The information pack does not attempt to define any of these categories, and such vaguely worded guidelines could lead to confusion amongst NGOs. While studies have shown that counter-speech might be the most effective way to deal with online extremism, the OCCI would have to make major changes to reach the goals of the counter-speech movement.

In October 2016, Facebook reportedly came under scrutiny again: a German Federal Minister stated that Facebook was still not effectively dealing with hate speech targeted at refugees and that another investigation might be in the pipeline.

Conclusion

It remains to be seen whether the alternative regulatory methods adopted by social media companies will effectively deal with hate speech and online extremism.

It is important to note that social media companies are ‘outsourcing’ internal regulation to private bodies or users (YouTube Heroes, Arvato and the OCCI). These private bodies might amplify the problems already faced by the intermediary liability system, which could lead to ‘over-censorship’. That system has been criticised for its ‘notice and takedown’ regime: non-compliance with takedown orders attracts strict sanctions, and fear of these sanctions could lead intermediaries to take down content that falls in grey areas but is not illegal.

Even under the internal regulation method, however, social media companies will continue to function under the fear of state pressure. Private bodies like Arvato and NGOs affiliated with the OCCI will also regulate content, with the incentive of receiving ‘advertisement credit’ and ‘points’. This could lead to over-reporting for the sake of incentives and, coupled with pressure from the state, to a ‘chilling’ effect.

In addition, some of these private bodies do not operate in a transparent manner. For instance, making public information available on Arvato’s content regulation activities and the guidelines it is bound by would help create a far more accountable system. Further, the OCCI needs clearer, well-defined policies if it is to fulfil the objectives of disseminating counter-speech.

 

 

Government Advertisements and Freedom of Press: Examining the Rajasthan Patrika case

By Arpita Biswas

Rajasthan Patrika, a highly popular newspaper, has seen a sharp decline in government advertisement allocation over the last year. According to reports, the alleged reason for the decline was the Patrika’s political ideology, which did not favour the state government; the state government, however, claimed that the decline was necessary to correct an existing imbalance in advertisement allocation. The “fight for survival” drew to an end earlier this month when the Supreme Court ordered the Rajasthan government to allocate a higher percentage of advertisements to the Patrika.

Advertisement allocation is necessary for newspapers to reduce their cost of production. Declining advertising revenues would force publishers to pass the cost on to readers, which would negatively impact circulation. Withholding government advertisements in this manner has been held to be against the ‘freedom of the press’.

While not explicitly enumerated, freedom of the press is a constitutionally protected right under Article 19 of the Constitution (Romesh Thappar vs. State of Madras). International instruments like the American Convention on Human Rights prohibit ‘indirect’ means of censorship (Article 13.3), and the Office of the Special Rapporteur for Freedom of Expression (Inter-American Commission on Human Rights), in its Principles on the Regulation of Government Advertising and Freedom of Expression, identifies the curbing of advertisement allocation as one such ‘indirect’ means of censorship.

A similar set of facts arose in 1981 in Ushodaya Publications vs. Government of Andhra Pradesh, where the petitioners claimed that the ‘Advertisement Procedure’ stipulated in a Government Order restrained the freedom of speech and expression under Article 19 of the Constitution. The Order specified certain production and circulation standards, which the publishers considered restrictive. The petitioners also claimed that the absence of a “right to notice and hearing…a machinery for redress or correction of an adverse decision by way of appeal or revision” rendered the system vague and arbitrary. The court, however, held that the mechanism was not unconstitutional and merely streamlined the advertisement allocation process through the Director of Information and Public Relations; the Director’s discretion was not discriminatory since advertisements were ‘commercial speech’. Advertisement allocation was not considered integral to the freedom of the press.

The new DAVP Policy

Roughly three decades later, the press freedom concerns raised in Ushodaya remain unaddressed.

The rules governing advertisement allocation are overseen by the Directorate of Advertising and Visual Publicity (DAVP), a nodal agency through which government bodies streamline the advertisement allocation process. In June 2016, the Information and Broadcasting Ministry published a new policy for the DAVP, the National Advertisement Policy. Among other objectives, the Policy was introduced to “focus on transparency and equity in release of government ads”.

The Policy allocates advertisements to ‘small’, ‘medium’ and ‘big’ newspapers based on their circulation numbers. Under a ‘scorecard’ system, the more points a newspaper has, the greater the percentage of advertisements it is allocated. Rajasthan Patrika happens to be an ‘empanelled’ newspaper that would score highly. (It is a member of the Audit Bureau of Circulation and is one of the most widely read papers in the country, which leads us to believe that it must be widely circulated.)
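
As an illustration of how a circulation-based scorecard might translate into allocation shares, the sketch below buckets newspapers by circulation, scores them, and divides advertisements in proportion to their scores. The cut-offs, point values and the empanelment bonus are invented assumptions, not figures from the DAVP Policy.

```python
# Invented circulation cut-offs and point values; not taken from the Policy.
def category(circulation: int) -> str:
    if circulation < 25_000:
        return "small"
    if circulation < 75_000:
        return "medium"
    return "big"

def score(circulation: int, empanelled: bool) -> int:
    # Bigger papers score higher; empanelment (ABC/RNI membership etc.) adds a bonus.
    base = {"small": 10, "medium": 25, "big": 50}[category(circulation)]
    return base + (20 if empanelled else 0)

def allocate(papers: dict[str, tuple[int, bool]], total_ads: int) -> dict[str, int]:
    # Share of advertisements is proportional to each paper's score.
    scores = {name: score(circ, emp) for name, (circ, emp) in papers.items()}
    total = sum(scores.values())
    return {name: round(total_ads * s / total) for name, s in scores.items()}

papers = {"big_empanelled_daily": (500_000, True), "small_local_weekly": (10_000, False)}
print(allocate(papers, total_ads=100))  # the empanelled big paper takes the lion's share
```

The point of the sketch is simply that any such formula concentrates allocation on papers that clear the empanelment hurdles, which is where the criticism discussed below begins.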

The process of ‘empanelling’ is multi-pronged and extensive. With six criteria, ranging from approval by the Audit Bureau of Circulation (ABC) and the Registrar of Newspapers for India (RNI) to an annual subscription payment to the Press Council of India, only select newspapers are empanelled. The Policy has faced criticism from small and medium newspapers for favouring larger ones, and the renewed empanelling process would seemingly favour existing members of the ABC/RNI.

Criticism of the Policy

In addition to the criticism from smaller newspapers, the Policy has a few inherent flaws. Like the Government Order in the Ushodaya case, the DAVP Policy does not provide a redressal mechanism. The ‘minimum print area’ criterion under Clause 11 of the Policy is vaguely reminiscent of Bennett Coleman vs. Union of India, where the permissible number of pages was regulated under the Newsprint Policy of 1972-73; the Court in that case held the limit on the number of pages to be unconstitutional. A ‘minimum print area’ requirement could severely restrict the circulation of papers as well.

Several of the clauses rely on newspapers being vetted by the ABC or the RNI, agencies which audit circulation and verify the legitimacy of newspapers. The ABC also requires a newspaper to be registered with the RNI to be considered for membership of the bureau. However, reports of fake newspapers and journals registered by the RNI surfaced last year. These ‘fake’ publications, which had either printed the same content multiple times or had not printed at all, were being allocated government advertisement revenue, and a huge discrepancy was found between the number of newspapers circulated in the districts and those empanelled by the DAVP. Amidst allegations of corruption, the conduct of the DAVP and the RNI renders the whole system of ‘empanelling’ questionable.

The ABC Manual paints a murky picture as well. Clause 14.7 states that non-submission of books and records "will lead to non-consideration for certification", while in the same document Article 5A gives the ABC the authority to revoke membership in the event of non-submission of circulation figures. Similarly, in the case of the ‘fake’ publications, the RNI admitted it was ‘not empowered to take action’.

Conclusion

‘Empanelling’ under the DAVP is a restrictive practice that could curtail the freedom of speech and expression. An ‘indirect’ means of censorship, the multi-pronged empanelment process is open to corruption and arbitrariness, and the DAVP Policy leaves the allocation system vulnerable to misuse. The need of the hour is a far more coherent and transparent set of guidelines.