How (not) to get away with murder: Reviewing Facebook’s live streaming guidelines

Introduction

The recent shooting in Cleveland, live streamed on Facebook, has brought the social media company’s regulatory responsibilities into question. Since the launch of Facebook Live in 2016, the service’s role in raising political awareness has been acknowledged. However, the service has also been used to broadcast several instances of graphic violence.

The streaming of violent content (including instances of suicide, murder and gang rape) has raised serious questions about Facebook’s responsibility as an intermediary. While it is not technically feasible for Facebook to review all live videos as they are being streamed, or to filter them before they are streamed, the platform does have a routine procedure in place to take down such content. This post will examine the guidelines in place for taking down live streamed content and discuss alternatives to the existing reporting mechanism.

What guidelines are in place?

Facebook has ‘community standards’ in place; however, its internal regulation methods are unknown to the public. Live videos have to comply with these ‘community standards’, which specify that Facebook will remove content relating to ‘direct threats’, ‘self-injury’, ‘dangerous organizations’, ‘bullying and harassment’, ‘attacks on public figures’, ‘criminal activity’ and ‘sexual violence and exploitation’.

The company has stated that it ‘only takes one report for something to be reviewed’. This system of review has been criticized, since graphic content could go unnoticed without a report. In addition, this form of reporting may prove ineffective since there is no ‘compulsory reporting’ obligation on viewers. Incidentally, the Cleveland shooting video was not detected by Facebook until it was flagged as ‘offensive’, a couple of hours after the incident. The company has also stated that it is working on developing ‘artificial intelligence’ that could help put an end to these broadcasts. However, it currently relies on the reporting mechanism, under which ‘thousands of people around the world’ review posts that have been reported. The reviewers check whether the content goes against the ‘community standards’ and ‘prioritize videos with serious safety implications’.
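To make the workflow described above concrete, here is a minimal sketch of a report-driven review queue in which a single user report is enough to enqueue a video for human review, and reviewers pull safety-critical items first. The priority categories, class names and scoring are invented for illustration; nothing here reflects Facebook’s actual systems.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Hypothetical priority levels; the real moderation categories are not public.
PRIORITY = {"safety": 0, "graphic_violence": 1, "other": 2}

@dataclass(order=True)
class Report:
    priority: int
    order: int
    video_id: str = field(compare=False)
    reason: str = field(compare=False)

class ReviewQueue:
    """One user report is enough to put a video in the human review queue."""
    def __init__(self):
        self._heap = []
        self._counter = count()
        self._queued = set()

    def report(self, video_id: str, reason: str):
        if video_id in self._queued:      # a report is already pending review
            return
        self._queued.add(video_id)
        priority = PRIORITY.get(reason, PRIORITY["other"])
        heapq.heappush(self._heap, Report(priority, next(self._counter), video_id, reason))

    def next_for_review(self):
        """Reviewers pull the highest-priority (safety-critical) report first."""
        if not self._heap:
            return None
        item = heapq.heappop(self._heap)
        self._queued.discard(item.video_id)
        return item

queue = ReviewQueue()
queue.report("live_123", "other")
queue.report("live_456", "safety")
print(queue.next_for_review().video_id)  # "live_456" is reviewed first
```

The sketch also makes the criticism visible: a video that is never reported never enters the queue at all.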

While deciding whether a video should be taken down, the reviewers also take the ‘context and degree’ of the content into consideration. For instance, content that is aimed at ‘raising awareness’, even if it displays violence, will be allowed, whereas content celebrating such violence will be taken down. For example, when a live video of Philando Castile, a civilian, being shot by a police officer in Minnesota went viral, Facebook kept the video up on its platform, stating that it did not glorify the violent act.

Regulation

Other than the internal guidelines by which Facebook regulates itself, there have not been instances of government regulators, like the United States’ Federal Communications Commission (FCC), intervening. Unlike the realm of television, where the FCC regulates content and deems material ‘inappropriate’, social media websites are protected from content regulation.

This brings up the question of intermediary liability and Facebook’s liability for hosting graphic content. Under American law, there is a distinction between ‘publishers’ and ‘common carriers’. A common carrier only ‘enables communications’ and does not ‘publish content’. If a platform edits content, it is most likely a publisher. A ‘publisher’ has a higher level of responsibility for content hosted on its platform than a ‘carrier’. In most instances, social media companies are covered under Section 230 of the Communications Decency Act, a safe harbor provision by which they are not held liable for third-party content. However, questions have been raised about whether Facebook is a ‘publisher’ or a ‘common carrier’, and there seems to be no conclusive answer.

Conclusion

Several experts have considered possible solutions to this growing problem. Some believe that such features should be limited to certain partners, and should be opened up to the public only once additional safeguards and better artificial intelligence technologies are in place. In these precarious situations, enforcing stricter laws on intermediaries might not resolve the issue at hand. Some jurisdictions have ‘mandatory reporting’ provisions, specifically for crimes of sexual assault. In India, under Section 19 of the Protection of Children from Sexual Offences Act, 2012, ‘any person who has apprehension that an offence…is likely to be committed or has knowledge that such an offence has been committed’ has to report it. In the context of cyber-crimes, this system of ‘mandatory reporting’ would shift the onus onto viewers and supplement the existing reporting system. Mandatory provisions of this nature do not exist in the United States, where most of the larger social media companies are based.

Accordingly, possible solutions should focus on strengthening the existing reporting system, rather than on holding social media platforms liable.

Reviewing the Law Commission’s latest hate speech recommendations

Introduction

The Law Commission has recently released a report on hate speech laws in India. The Supreme Court in Pravasi Bhalai vs. Union of India asked the Law Commission to recommend changes to existing hate speech laws and to “define the term hate speech”. The report discusses the history of hate speech jurisprudence in India and in certain other jurisdictions. In addition, it stresses the difficulty of defining hate speech and the lack of a concise definition. In the absence of such a definition, certain ‘identifying criteria’ have been mentioned to detect instances of hate speech. It also discusses the theories of Jeremy Waldron (the ‘dignity’ principle) and makes a case for protecting the interests of minority communities by regulating speech. In this regard, two new sections for the IPC have been proposed. They are as follows:

(i) Prohibiting incitement to hatred-

“153 C. Whoever on grounds of religion, race, caste or community, sex, gender identity, sexual orientation, place of birth, residence, language, disability or tribe –

(a)  uses gravely threatening words either spoken or written, signs, visible representations within the hearing or sight of a person with the intention to cause, fear or alarm; or

(b)  advocates hatred by words either spoken or written, signs, visible representations, that causes incitement to violence shall be punishable with imprisonment of either description for a term which may extend to two years, and fine up to Rs 5000, or with both.”.

(ii) Causing fear, alarm, or provocation of violence in certain cases.

“505 A. Whoever in public intentionally on grounds of religion, race, caste or community, sex, gender, sexual orientation, place of birth, residence, language, disability or tribe-

uses words, or displays any writing, sign, or other visible representation which is gravely threatening, or derogatory;

(i) within the hearing or sight of a person, causing fear or alarm, or;

(ii) with the intent to provoke the use of unlawful violence,

against that person or another, shall be punished with imprisonment for a term which may extend to one year and/or fine up to Rs 5000, or both”.

The author is of the opinion that these recommended amendments are vague and broadly worded and could lead to a chilling effect and over-censorship. Here are a few reasons why the recommendations might not be compatible with free speech jurisprudence:

  1. Three-part test

Article 10 of the European Convention on Human Rights lays down three requirements that need to be fulfilled to ensure that a restriction on free speech is warranted. The Law Commission report also discusses this test; it includes the necessity of a measure being ‘prescribed by law’, the need for a ‘legitimate aim’ and the test of ‘necessity and proportionality’.

Under the ‘prescribed by law’ standard, it is necessary for a restriction on free speech to be ‘clear and not ambiguous’. For instance, a phrase like ‘fear or alarm’ (which exists in Section 153A and Section 505) has been criticized for being ‘vague’. Without defining or restricting this term, the public would not know what constitutes ‘fear or alarm’ and would not know how to comply with the law. This standard was also reiterated in Shreya Singhal vs. Union of India, where it was held that the ambiguously worded Section 66A could be problematic for innocent people since they would not know “which side of the line they fall” on.

  2. Expanding scope to online offences?

The newly proposed sections also mention that any ‘gravely threatening words within the hearing or sight of a person’ would be penalized. Presumably, the phrase ‘within the sight or hearing of a person’ broadens the scope of this provision and could allow online speech to come under the ambit of the IPC. This phrase is similar to the wording of Section 5(1) of the Public Order Act, 1986[1] in the United Kingdom, which penalizes “harassment, alarm or distress”. Even though that section does not explicitly mention that it covers offences on the internet, it has been presumed to do so.[2]

Similarly, if the intent of the framers of Section 153C is to expand its scope to cover online offences, it might introduce the same issues as the now-omitted Section 66A of the IT Act did. Section 66A penalized, among other things, the transmission of information which was ‘menacing’ or which promoted ‘hatred or ill will’. The over-breadth of these terms was one reason for striking the section down; another was the lowering of the ‘incitement’ threshold (discussed below). Even though the proposed Section 153C does not provide for as many grounds (hatred, ill will, annoyance, etc.), it does explicitly lower the threshold from ‘incitement’ to ‘fear or alarm’/‘discrimination’.

  3. The standard of ‘hate speech’

The report also advocates penalizing the ‘fear or alarm’ caused by such speech, since it could potentially have the effect of ‘marginalizing a section of the society’. As mentioned above, the report explicitly states that the threshold of ‘incitement to violence’ should be lowered and that factors like ‘incitement to discrimination’ should also be considered.

The Shreya Singhal judgment drew a distinction between ‘discussion, advocacy and incitement’, stating that a restriction on speech protected under Article 19(1)(a) of the Constitution would be justified only where the speech amounted to ‘incitement’ and not merely ‘discussion’ or ‘advocacy’. This distinction was drawn so that discussing or advocating ideas which could lead to problems with ‘public order’ or disturb the ‘security of the state’ could be differentiated from ‘incitement’, which establishes more of a ‘causal connection’.

Similarly, if the words used contribute to causing ‘fear or alarm’, the threshold of ‘incitement’ would be lowered, and constitutionally protected speech could be censored.

Conclusion

Despite the shortcomings mentioned above, the report is positive in a few ways. It draws attention to important contemporary issues affecting minority communities and how speech is often used to mobilize communities against each other. It also relies on Jeremy Waldron’s ‘dignity principle’ to make a case for imposing differing hate speech standards to protect minority communities. In addition, the grounds for discrimination now include ‘tribe’ and ‘sexual orientation’ amongst others.

However, existing case law, coupled with recent instances of censorship, could make the insertion of these provisions troubling. India’s relationship with free speech is already dire; the Press Freedom Index ranks the country at 133 (out of 180) and the Freedom on the Net report states that India is only ‘partly free’ in this regard. The Law Commission might need to reconsider the recommendations for the sake of upholding free speech. Pravasi Bhalai called for sanctioning politicians’ speeches, but the recommendations made by the Law Commission might be far-reaching and their effects chilling.

 

[1] Section 5- Harassment, alarm or distress.
(1)A person is guilty of an offence if he—
(a)uses threatening or abusive words or behaviour, or disorderly behaviour, or
(b)displays any writing, sign or other visible representation which is threatening or abusive,
within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby.

[2] David Wall, Cybercrime: The Transformation of Crime in the Information Age, Page 123, Polity.

Censorship & certification – Outlining the CBFC’s role under law

The Central Board of Film Certification (CBFC) functions as the primary body certifying films for public exhibition in India. It is guided by the Cinematograph Act, 1952, and various rules and guidelines in determining the nature of certification to be granted to a film. However, over the past few months, reports about the CBFC’s alleged overreach – moving from certification of films to moral policing, for instance, by denying certification to films which address LGBTQ issues – have made the news. This post outlines the legal framework within which the CBFC operates and discusses the prospects for change within this framework.

The CBFC was constituted under the Cinematograph Act, 1952 (Act), which aims to provide for the certification of cinematograph films for exhibition. Specifically, the CBFC was set up for the purpose of ‘sanctioning films for public exhibition’. The law, however, also allows the CBFC to require modifications to be made to a film before providing such sanction / certification.

Over time, the CBFC has increasingly used this power to direct cuts in films for various reasons, leading to it being commonly referred to as the ‘censor board’. In recent months, the CBFC has stirred up controversy in relation to the certification (or lack thereof) of films with subject matter ranging from feminism / women’s empowerment and LGBTQ issues to the Indian government’s demonetisation drive. The increasing possibility that a film will not even be granted certification for public exhibition has led to fears that self-censorship will become the norm.

This fear seems to have permeated into the online video streaming industry already. Today, it is not clear whether streaming service providers are required to abide by the certification norms under the Act. While streaming platforms differ in their approach, and some providers choose to stream unedited, i.e. ‘un-censored’, content, others are choosing to make only certified versions of films available online. There have also been controversial claims of service providers choosing to edit / censor content beyond the requirements of the CBFC.

The legal framework within which the CBFC operates is outlined below.

As described above, the CBFC is the sanctioning body which certifies films for public exhibition. The Act also allows for the setting up of regional centers or ‘advisory panels’ to assist the CBFC in its functions.

The Act provides that any person who wishes to exhibit a film should make an application to the CBFC for certification. The CBFC may (after examining the film, or having it examined):

  • sanction the film for unrestricted public exhibition, with a caution, where required, stating that parents / guardians may consider whether the film is suitable for viewing by a child (i.e. grant a U or UA certificate)
  • sanction the film for public exhibition restricted to adult viewers (i.e. grant an A certificate)
  • sanction the film for public exhibition restricted to members of a certain profession or class of persons based on the nature of the film (i.e. grant an S certificate)
  • direct that certain modifications are made to the film before sanctioning the film for exhibition as described above, or
  • refuse to sanction the film for public exhibition.

The Act, as well as the Cinematograph (Certification) Rules, 1983, also provides detailed procedures for the appointment of members of the CBFC, the advisory panels and the appellate bodies, for applications for certification, and for appeals against decisions of the CBFC. The Act also provides for revisionary powers of the Central government in relation to the decisions of the CBFC.

In addition to the above, the Act provides principles on the basis of which the CBFC may refuse to certify a film – namely, “if a film or any part of it is against the interests of the sovereignty and integrity of India, security of the state, friendly relations with foreign states, public order, decency or morality, or involves defamation or contempt of court or is likely to incite the commission of an offence”.

These principles are further supplemented by the certification guidelines issued by the Central Government in 1991, in accordance with the powers granted to it under the Act.

These guidelines provide five objectives for film certification under the Act: (a) the medium of film remains responsible and sensitive to the values and standards of society; (b) artistic expression and creative freedom are not unduly curbed; (c) certification is responsive to social changes; (d) the medium of film provides clean and healthy entertainment; and (e) the film is of aesthetic value and cinematically of a good standard.

In order to meet these objectives, the guidelines require the CBFC to ensure that films do not contain (a) scenes that glorify / justify activities such as violence, drinking, smoking or drug addiction, (b) scenes that denigrate women, (c) scenes that involve sexual violence or depict sexual perversions, or (d) scenes that show violence against children, among many others.

The language used in many of these guidelines, while perhaps well intended, is vague, and allows for wide discretion in certification subject entirely to the sensibilities of the individual members of the CBFC.

In 2016, the Ministry of Information & Broadcasting set up a committee to evolve broad but clear guidelines / procedures to guide the CBFC in the certification of films. The committee was headed by noted filmmaker Mr. Shyam Benegal. The committee, in its report, has expressed the view that it is not for the CBFC to act as a ‘moral compass’ and decide on what constitutes glorification or promotion of certain issues.

The committee’s report suggests that the only function of the CBFC should be to determine which category of viewers a film can be exhibited to. The committee’s report has suggested new guidelines, with the following objectives: (i) children and adults are protected from potentially harmful or otherwise unsuitable content; (ii) audiences (and parents / those responsible for children) are empowered to make informed viewing decisions; (iii) artistic expression and creative freedom are not unduly curbed in the classification of films; (iv) the process of certification is responsive to social changes.

The committee’s recommendations are yet to be implemented; however, news reports suggest that work is currently underway to modify the new guidelines suggested in the report.

It is interesting to note that the committee’s report does not address the issue of certification requirements for films available on online streaming platforms. In March 2016, the CBFC had suggested that it would require all film-makers, producers and directors in India to sign an undertaking stating that they would not share with / release ‘excised portions of a feature or a film to anybody’, including streaming service providers. An affidavit to this effect was accepted by the Punjab & Haryana High Court, which suggested in its order that such steps would be sufficient to ensure that ‘censored’ content would not be available. However, later that year, the Ministry of Information and Broadcasting confirmed in a response to an RTI application that it does not intend to regulate or censor online content.

The Supreme Court Hears Sabu Mathew George v. Union of India – Another Blow for Intermediary Liability

The Supreme Court heard arguments in Sabu Mathew George v. Union of India today. This writ petition was filed in 2008, with the intention of banning ‘advertisements’ offering sex selective abortions and related services from search engine results. According to the petitioner, these advertisements violate Section 22 of the Pre-Conception and Pre-Natal Diagnostic Techniques (Regulation and Prevention of Misuse) Act, 1994 (‘PCPNDT Act’) and consequently must be taken down.

A comprehensive round-up of the issues involved and the Court’s various interim orders can be found here. Today’s hearing focused mainly on three issues – first, the setting up of the Nodal Agency entrusted with providing details of websites to be blocked by search engines; second, the ambit and scope of the word ‘advertisement’ under the PCPNDT Act; and third, the obligation of search engines to find offending content and delete it on their own, without a government directive or judicial order to that effect.

Appearing for the Central Government, the Solicitor General informed the Court that, as per its directions, a Nodal Agency has now been constituted. An affidavit filed by the Centre provided details regarding the agency, including contact details, which would allow individuals to bring offending content to its notice. The Court was informed that the Agency would be functional within a week.

On the second issue, the petitioner’s counsel argued that removal of content must not be limited only to paid or commercial advertisements, but should extend to other results that induce or otherwise lead couples to opt for sex selective abortions. This was opposed by Google and Yahoo!, who contended that organic search results must not be tampered with, as the law only bans ‘advertisements’. Google’s counsel averred that the legislation could never have intended the removal of generic search results, which directly facilitate information and research. On the other hand, the Solicitor General argued that the word ‘advertisement’ should be interpreted keeping the object of the legislation in mind – that is, to prevent sex-selective abortions. On behalf of Microsoft, it was argued that even if the broadest definition of ‘advertisement’ were adopted, what has to be seen before content can be removed is the animus – whether its objective is to solicit sex selective abortions.

On the third issue, the counsel for the petitioner argued that search engines should automatically remove offending content – advertisements or otherwise – even in the absence of a court order or directions from the Nodal Agency. It was his contention that it was not feasible to keep providing search engines with updated keywords and/or results, and that the latter should employ technical means to automatically block content. This was also echoed by the Court. On behalf of all the search engines, it was pointed out that removal of content without an order from a court or the government runs directly against the Supreme Court’s judgment in Shreya Singhal v. Union of India. In that case, the Court had read down Section 79 of the Information Technology Act, 2000 (‘IT Act’) to hold that intermediaries are only required to take down content pursuant to court orders or government directives. The Court seemed to suggest that Shreya Singhal was decided in the context of a criminal offence (Section 66A of the IT Act) and is distinguishable on that ground.
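As a rough illustration of the kind of ‘technical means’ of automatic blocking contemplated here, the sketch below filters search results against a keyword list. The keyword list, field names and matching logic are hypothetical and do not reflect any search engine’s actual implementation; the example also shows why the respondents worried about over-blocking, since a purely mechanical filter cannot distinguish advertisements from legitimate informative content.

```python
# Hypothetical keyword-based auto-blocking of search results.
BLOCKED_KEYWORDS = {"prenatal sex determination", "gender selection kit"}  # illustrative only

def filter_results(results: list[dict]) -> list[dict]:
    """Drop results whose title or snippet contains a blocked keyword."""
    def is_blocked(result: dict) -> bool:
        text = (result.get("title", "") + " " + result.get("snippet", "")).lower()
        return any(keyword in text for keyword in BLOCKED_KEYWORDS)
    return [r for r in results if not is_blocked(r)]

results = [
    {"title": "Gender selection kit – order online", "snippet": "..."},
    {"title": "Research paper on sex-ratio trends in India",
     "snippet": "a review of prenatal sex determination laws"},
]
# Both results are dropped: the second, a legitimate research work, is caught too –
# the over-blocking problem the respondents raised before the Court.
print(filter_results(results))
```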

Additionally, it was pointed out that even if the respondents were to remove content on their own, the lack of clarity over what constitutes an ‘advertisement’ prevents them from deciding what content to remove. Overbroad removal of content might open them up to more litigation from authors and researchers with informative works on the subject. The Court did not offer any interpretation of its own, except to say that the ‘letter and spirit’ of the law must be followed. The lack of clarity on what is deemed illegal could, as pointed out by several counsels, lead to censorship of legitimate information.

Despite these concerns, in its order today, the Court has directed every search engine to form an in-house expert committee that will, based “on its own understanding”, delete content that is violative of Section 22 of the PCPNDT Act. In case of any conflict, these committees should approach the Nodal Agency for clarification, and the latter’s response is meant to guide the search engines’ final decision. The case has been adjourned to April, when the Court will see if the mechanism in place has been effective in resolving the petitioner’s grievances.

Facebook – Intermediary or Editor?

 

This post discusses the regulatory challenges that emerge from the changing nature of Facebook and other social networking websites.

Facebook has recently faced a lot of criticism for circulating fake news and for knowingly suppressing user opinions during the 2016 U.S. elections. The social media website has also been criticised for over-censoring content on the basis of its community standards. In light of these issues, this post discusses whether Facebook can be considered a mere host or transmitter of user-generated content anymore. This post also seeks to highlight the new regulatory challenges that emerge from the changing nature of Facebook’s role.

The Changing Nature of Facebook’s Role

Social media websites such as Facebook and Twitter, Internet Service Providers, search engines, e-commerce websites etc. are all currently regulated as “intermediaries” under Section 79 of the Information Technology Act, 2000 (“IT Act”). An intermediary, “with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record.” Accordingly, intermediaries are not liable for user-generated content or communication as long as they observe due diligence and comply with certain conditions, such as acting promptly on takedown orders issued by the appropriate government or its agency.

 Use of Human Editors

While Facebook is currently regarded as an intermediary, some argue that it has ceased to be a mere host of user-generated content and has acquired a unique character as a platform. This argument was bolstered when Facebook’s editorial guidelines were leaked in May 2016. The guidelines demonstrated that the apprehension that Facebook was acting in an editorial capacity was true for at least some aspects of the platform, such as trending topics. Reports suggest that Facebook used human editors to “inject” or “blacklist” stories in the trending topics list. The social media website did not simply rely on algorithms to generate the trending topics; instead, it instructed human editors to monitor traditional news media and determine what the trending topics should be.

These editorial guidelines revealed that the editors at Facebook regularly reviewed algorithmically generated topics and added background information such as videos or summaries to them, before publishing them as trending topics. Further, the social media website also relied heavily on traditional news media websites to make such assessments. Critics have pointed out that Facebook’s editorial policy is extremely inadequate, as it does not incorporate guidelines relating to checking for accuracy, encouraging media diversity, respecting privacy and the law, or editorial independence.

Months after this revelation, Facebook eliminated human editors from its trending platform and began relying solely on algorithms to filter trending topics. However, this elimination has resulted in the new problem of the circulation of fake news. This is especially alarming because increased access to the Internet has meant that a large number of people get their news from social media websites. A recent research report pointed out that nearly 66% of Facebook users in the U.S. get news from Facebook. Similarly, nearly 59% of Twitter users rely on the website for news. In light of this data, eliminating human discretion completely does not appear to be a sensible approach when it comes to filtering politically critical content, such as trending news.
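To see why purely algorithmic selection can amplify fabricated stories, consider this toy sketch of an engagement-only trending ranker. The scoring weights and data are invented for illustration and bear no relation to Facebook’s actual algorithm; the point is simply that nothing in such a pipeline checks whether a story is accurate.

```python
from collections import Counter

# Toy engagement-only ranker: a story trends purely on shares and reactions,
# with no check on whether it is true. Weights are invented for illustration.
def trending_topics(posts: list[dict], top_n: int = 3) -> list[str]:
    scores = Counter()
    for post in posts:
        scores[post["topic"]] += post["shares"] * 2 + post["reactions"]
    return [topic for topic, _ in scores.most_common(top_n)]

posts = [
    {"topic": "Fabricated celebrity death hoax", "shares": 9000, "reactions": 40000},
    {"topic": "Verified election results", "shares": 3000, "reactions": 15000},
]
print(trending_topics(posts))  # the hoax outranks the verified story
```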

Private Censorship

Facebook has also been criticised widely for over-censoring content. The social media website blocks accounts and takes down content that is in contravention of its “community standards”. These community standards prohibit, among other things, hate speech, pornography and content that praises or supports terrorism. In India, the social media website faced a lot of flak for censoring content and blocking users during the unrest that followed the death of Burhan Wani, a member of a Kashmiri militant organisation. Reports suggest that nearly 30 academics, activists and journalists from across the world were restricted from discussing or sharing information regarding the incident on Facebook.

Facebook’s community standards have also been criticised for lacking a nuanced approach to issues such as nudity and hate speech. The blocking of content by private entities on the basis of such “community standards” raises concerns that the standards are too wide and could have a chilling effect on free speech. As highlighted before, Facebook’s unique position, where it determines what content qualifies as hate speech or praise of terrorism, allows it to throttle alternative voices and influence the online narrative on such issues. The power exercised by Facebook in such instances makes it difficult to identify it as only a host or transmitter of content generated by its users.

Conclusion

The discussion above demonstrates that while Facebook does not behave entirely like a conventional editor, it would be too simplistic to regard it as a host of user-generated content.

Facebook is a unique platform that enables content distribution, possesses intimate information about its users, and has the ability to design the space and conditions under which its users engage with content. It has been argued that Facebook must be considered a “social editor” which “exercises control not only over the selection and organisation of content, but also, and importantly, over the way we find, share and engage with that content.” Consequently, Facebook and other social media websites have been described as “privately controlled public spheres”, i.e. much like traditional media, they have become platforms which provide information and space for political deliberation.

However, if we agree that Facebook is more akin to a “privately controlled public sphere”, we must rethink the regulatory bucket under which we categorise the platform and the limits to its immunity from liability.

This post is written by Faiza Rahman.

NDTV INDIA BAN: A CASE OF REGULATORY OVERREACH AND INSIDIOUS CENSORSHIP?

In a highly contentious move, the Ministry of Information and Broadcasting (‘MIB’) issued an order banning the telecast of the Hindi news channel ‘NDTV India’ on 9th November, 2016. The MIB imposed this ‘token penalty’ on NDTV India following the recommendation of an Inter-Ministerial Committee (‘IMC’). The IMC had found the channel liable for revealing “strategically sensitive information” during the coverage of Pathankot terrorist attacks on 4th January, 2016. The ban has, however, been put on hold by the MIB after the Supreme Court agreed to hear a writ petition filed by NDTV India against the ban.

The order passed by the MIB raises some important legal issues regarding the freedom of speech and expression of the press. Since news channels are constantly racing to garner Television Rating Points, they may sometimes overlook the letter of the law while covering sensitive incidents such as terrorist attacks. In such cases, regulation of the media becomes necessary. However, it is tricky to achieve an optimum balance between the various concerns at play here – the freedom of expression of the press and the people’s right to information, public interest and national security.

In this post, we discuss the background of the NDTV India case and the legal issues arising from it. We also analyze and highlight the effects of governmental regulation of the media and its impact on the freedom of speech and expression of the media.

NDTV Case – A Brief Background:

On January 29, 2016, the MIB had issued a show cause notice to NDTV India alleging that its coverage of the Pathankot military airbase attack had revealed vital information which could have been used by terror operators to impede the counter-operations carried out by the security forces. The notice also provided details regarding the allegedly sensitive information revealed by NDTV India.

In its defence, the channel claimed that the coverage had been “balanced and responsible” and that it was committed to the highest levels of journalism. The channel also stated that the sensitive information allegedly revealed by the channel regarding critical defence assets and location of the terrorists was already available in the public domain at the time of reporting. It was also pointed out that other news channels which had reported on similar information had not been hauled up by the MIB.

However, the MIB, in its subsequent order, held that NDTV India’s coverage contravened Rule 6(1)(p) of the Programme and Advertising Code (the ‘Programme Code’ or ‘Code’) issued under the Cable TV Network Rules, 1994 (‘Cable TV Rules’). In exercise of its powers under the Cable TV Networks (Regulation) Act, 1995 (‘Cable TV Act’) and the Guidelines for Uplinking of Television Channels from India, 2011, the MIB imposed a ‘token penalty’ of a one-day ban on the broadcast of the channel.

Rule 6(1)(p) of the Programme Code:

Rule 6 of the Code sets out the restrictions on the content of programmes and advertisements that can be broadcasted on cable TV. Rule 6(1)(p) and (q) were added recently. Rule 6(1)(p) was introduced after concerns were expressed regarding the real-time coverage of sensitive incidents like the Mumbai and Gurdaspur terror attacks by Indian media. It seeks to prevent disclosure of sensitive information during such live coverage that could act as possible information sources for terror operators.

Rule 6(1)(p) states that: “No programme should be carried in the cable service which contains live coverage of any anti-terrorist operation by security forces, wherein media coverage shall be restricted to periodic briefing by an officer designated by the appropriate Government, till such operation concludes.

Explanation: For the purposes of this clause, it is clarified that “anti-terrorist operation” means such operation undertaken to bring terrorists to justice, which includes all engagements involving justifiable use of force between security forces and terrorists.”

Rule 6(1)(p), though necessary to regulate overzealous media coverage especially during incidents like terrorist attacks, is vague and ambiguous in its phrasing. The term ‘live coverage’ has not been defined in the Cable TV Rules, which makes it difficult to assess its precise meaning and scope. It is unclear whether ‘live coverage’ means only live video feed of the operations or whether live updates through media reporting without visuals will also be considered ‘live coverage’.

Further, the explanation to Rule 6(1)(p) also leaves a lot of room for subjective interpretation. It is unclear whether the expression “to bring terrorists to justice” implies that the counter-operations should result in the terrorists being killed, or whether the intention is to also cover the trial and conviction of the terrorists, if they are caught alive. If so, it would be highly impractical to bar such coverage under Rule 6(1)(p). The inherent vagueness of this provision gives wide discretion to the governmental authorities to decide whether channels have violated the provisions of the Code.

In this context, it is important to highlight that the Supreme Court struck down Section 66A of the Information Technology Act, 2000 in Shreya Singhal vs. Union of India on the ground that it was vague and overbroad. The Court held that the vague and imprecise nature of the provision had a chilling effect on the freedom of speech and expression. Following from this, it will be interesting to see the stand of the Supreme Court when it tests the constitutionality of Rule 6(1)(p) in light of the strict standards laid down in Shreya Singhal and a spate of other judgments.

Freedom of Speech under Article 19(1)(a)

The right of the media to report news is rooted in the fundamental right to free speech and expression guaranteed under Article 19(1)(a) of the Constitution of India. Every right has a corresponding duty, and accordingly, the right of the media to report news is accompanied by a duty to function responsibly while reporting information in the interest of the public. The freedom of the media is not absolute or unbridled, and reasonable restrictions can be placed on it under Article 19(2).

In the present case, it can be argued that Rule 6(1)(p) fails the scrutiny of Article 19(2) due to the inherent vagueness in the text of the provision. However, the Supreme Court may be reluctant to deem the provision unconstitutional. This reluctance was demonstrated, for instance, when the Supreme Court dismissed a challenge to the constitutionality of the Cinematograph Act, 1952 and its attendant guidelines, brought on the ground that they contained vague restrictions in the context of certifying films. The Censor Board has used the wide discretion available to it to place unreasonable restrictions while certifying films. If the Supreme Court continues to allow such restrictions on the freedom of speech and expression, the Programme Code is likely to survive judicial scrutiny.

Who should regulate?

Another important issue that the Supreme Court should decide in the present case is whether the MIB had the power to impose such a ban on NDTV India. Under the current regulatory regime, there are no statutory bodies governing media infractions. However, there are self-regulatory bodies like the News Broadcasting Standards Authority (NBSA) and the Broadcasting Content Complaints Council (BCCC). The NBSA is an independent body set up by the News Broadcasters Association for regulating news and current affairs channels. The BCCC is a complaint redressal system established by the Indian Broadcasting Foundation for the non-news sector and is headed by retired judges of the Supreme Court and High Courts. Both the NBSA and the BCCC regularly look into complaints regarding violations of the Programme Code. These bodies are also authorized to issue advisories, censure broadcasters, levy penalties and direct channels to be taken off air if found in contravention of the Programme Code.

The decision of the MIB was predicated on the recommendation made by the IMC, which comprises solely government officials with no journalistic or legal background. The MIB should have considered referring the matter to a regulatory body with domain expertise, like the NBSA, that addresses such matters on a regular basis, or should at least have sought its opinion before arriving at a decision.

Way Forward

Freedom of expression of the press and the impartial and fair scrutiny of government actions and policies are imperative for a healthy democracy. Carte blanche powers for the government to regulate the media, as stipulated by the Cable TV Act, without judicial or other oversight mechanisms, pose a serious threat to free speech and the independence of the fourth estate.

The imposition of the ban against NDTV India by the MIB under vague and uncertain provisions can be argued to be a case of regulatory overreach and insidious censorship. Such executive intrusion on the freedom of the media will have a chilling effect on the freedom of speech. This can impact the vibrancy of public discourse and the free flow of information and ideas which sustains a democracy. Although the governmental decision has been stayed, the Supreme Court should intervene and clarify the import of the vague terms used in the Programme Code to ensure that the freedom of the press is not compromised and that fair and impartial news reporting is not stifled under the threat of executive action.

Google de-platforms Taliban Android app: Speech and Competition implications?

Written by Siddharth Manohar

A few weeks ago, Google pulled an app developed by the Taliban, which propagated violently extremist views and spread hateful content, from its online application marketplace, the Google Play Store. Google has stated that its reason for doing so is that the app violated its policy for the Google Play Store.

Google maintains a comprehensive policy statement for any developer who wishes to upload an app for public consumption on the Play Store. The policy, apart from setting out rules for the Play Store as a marketplace, also places certain substantive conditions on developers using the platform to reach users.

Amongst other restrictions, one head reads ‘Hate Speech’. It says:

We don’t allow the promotion of hatred toward groups of people based on their race or ethnic origin, religion, disability, gender, age, veteran status, or sexual orientation/gender identity.

Google found the Taliban app to violate this stipulation in the Play Store policy, as confirmed by a Google spokesperson, who said that the policies are “designed to provide a great experience for users and developers. That’s why we remove apps from Google Play that violate those policies.” The app was first detected by an online intelligence group which claims to monitor extremist content on social media. It was developed to increase access to the Taliban’s online presence by presenting content in the Pashto language, which is widely spoken in the Afghan region.

While the application itself is, of course, still available for download on a number of other regular websites, the nature of its content led to its removal from a marketplace. This is an interesting application of the restriction of hateful speech, because the underlying principle in Google’s policy itself pays heed to the understanding that the development and sale of apps forms a kind of free speech.

A potentially interesting debate in this area is the extent to which decisions on the contours of permissible speech can be made by a private entity on its public platform. The age-old debate about permissible restrictions on speech can find expression in this particular “marketplace of ideas”, the Google Play Store. On one hand, there is the concern of protecting users from harmful and hateful content – speech that targets and vilifies individuals based on some factor of their identity, be it race, gender, caste, colour, or sexual orientation. On the other hand, there will always be the concern that the monitoring of speech by the overseeing authority becomes excessive and censors certain kinds of opinions and perspectives from entering the mainstream.

This particular situation provides an easy example in the form of an application developed by an expressly terrorist organisation. It would however still be useful to keep an eye out in the future for the kind of applications that are brought under the ambit of such policies, and the principles justifying these policies.

The question of what, if any, kind of control can be exercised over this kind of editorial power of Google over its marketplace is also a relevant one. Google can no doubt justify its editorial powers in relatively simple terms – it has explicit ownership of the entire platform and can decide the basis on which to allow developers onto it. However, the Play Store accounts for an overwhelmingly large share of how users access applications on a daily basis. Therefore, Google’s policies on the Play Store have a significant impact on how, and whether, applications are accessed by users in the context of the entire marketplace of applications and users. The policy implication is that the principles behind Google’s Play Store policies need to be scrutinized for their impact on the entire app development ecosystem. This is evidenced by the fact that the European Commission, about a year ago, pulled up Google over competition concerns regarding its Android operating system, and has also recently communicated its list of objections to Google. The variety of speech and competition concerns applicable to this context makes it one to watch closely for developments of any kind and for further analysis.


Image Source: ‘mammela’, Pixabay.