How (not) to get away with murder: Reviewing Facebook’s live streaming guidelines

Introduction

The recent shooting in Cleveland, live streamed on Facebook, has brought the social media company's regulatory responsibilities into question. Since the launch of Facebook Live in 2016, the service's role in raising political awareness has been acknowledged. However, it has also been used to broadcast several instances of graphic violence.

The streaming of violent content (including instances of suicide, murder and gang rape) has raised serious questions about Facebook’s responsibility as an intermediary. While it is not technically feasible for Facebook to review all live videos as they are being streamed, or to filter them before they are streamed, the platform does have a routine procedure in place to take down such content. This post examines the guidelines for taking down live streamed content and discusses alternatives to the existing reporting mechanism.

What guidelines are in place?

Facebook has ‘community standards’ in place. However, its internal regulation methods are unknown to the public. Live videos have to comply with these ‘community standards’, which specify that Facebook will remove content relating to ‘direct threats’, ‘self-injury’, ‘dangerous organizations’, ‘bullying and harassment’, ‘attacks on public figures’, ‘criminal activity’ and ‘sexual violence and exploitation’.

The company has stated that it ‘only takes one report for something to be reviewed’. This system of review has been criticized since graphic content could go unnoticed without a report. Moreover, the reporting mechanism is likely to be ineffective because viewers are under no ‘compulsory reporting’ obligation. Incidentally, the Cleveland shooting video was not detected by Facebook until it was flagged as ‘offensive’, a couple of hours after the incident. The company has also stated that it is working on developing ‘artificial intelligence’ that could help put an end to these broadcasts. For now, however, it relies on the reporting mechanism, under which ‘thousands of people around the world’ review posts that have been reported. The reviewers check whether the content goes against the ‘community standards’ and ‘prioritize videos with serious safety implications’.
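To make that workflow concrete, here is a minimal sketch of a report-driven review queue of the kind described above: a single report is enough to enqueue a video for human review, and reports with safety implications are reviewed first. The severity categories, class names and behaviour are illustrative assumptions for this post, not Facebook’s actual system.

```python
import heapq
from dataclasses import dataclass, field

# Severity ranks are hypothetical; Facebook's real prioritisation criteria are not public.
SEVERITY = {"safety": 0, "graphic_violence": 1, "other": 2}  # lower = reviewed first

@dataclass(order=True)
class Report:
    priority: int
    video_id: str = field(compare=False)
    reason: str = field(compare=False)

class ReviewQueue:
    """A single report is enough to put a video in the human review queue."""
    def __init__(self):
        self._queue = []
        self._seen = set()

    def flag(self, video_id: str, reason: str):
        if video_id in self._seen:       # one report suffices; duplicates are ignored
            return
        self._seen.add(video_id)
        priority = SEVERITY.get(reason, SEVERITY["other"])
        heapq.heappush(self._queue, Report(priority, video_id, reason))

    def next_for_review(self):
        return heapq.heappop(self._queue) if self._queue else None

# Usage: a report with safety implications jumps ahead of earlier, less serious reports.
q = ReviewQueue()
q.flag("live_123", "other")
q.flag("live_456", "safety")
print(q.next_for_review().video_id)  # -> "live_456"
```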

While deciding if a video should be taken down, reviewers also take the ‘context and degree’ of the content into consideration. For instance, content that is aimed at ‘raising awareness’, even if it displays violence, will be allowed, whereas content celebrating such violence will be taken down. For example, when a live video of civilian Philando Castile being shot by a police officer in Minnesota went viral, Facebook kept the video up on its platform, stating that it did not glorify the violent act.

Regulation

Beyond the internal guidelines by which Facebook regulates itself, there have been no instances of government regulators, such as the United States’ Federal Communications Commission (FCC), intervening. Unlike the realm of television, where the FCC regulates content and deems material ‘inappropriate’, social media websites are protected from content regulation.

This brings up the question of intermediary liability and Facebook’s liability for hosting graphic content. Under American law, there is a distinction between ‘publishers’ and ‘common carriers’. A common carrier only ‘enables communications’ and does not ‘publish content’. If a platform edits content, it is most likely a publisher. A ‘publisher’ has a higher level of responsibility for content hosted on its platform than a ‘carrier’. In most instances, social media companies are covered by Section 230 of the Communications Decency Act, a safe harbor provision under which they are not held liable for third-party content. However, questions have been raised about whether Facebook is a ‘publisher’ or a ‘common carrier’, and there seems to be no conclusive answer.

Conclusion

Several experts have considered possible solutions to this growing problem. Some believe that such features should be limited to certain partners and opened up to the public only once additional safeguards and better artificial intelligence technologies are in place. In these precarious situations, enforcing stricter laws on intermediaries might not resolve the issue at hand. Some jurisdictions have ‘mandatory reporting’ provisions, specifically for crimes of sexual assault. In India, under Section 19 of the Protection of Children from Sexual Offences Act, 2012, ‘any person who has apprehension that an offence…is likely to be committed or has knowledge that such an offence has been committed’ has to report it. In the context of cyber-crimes, a system of ‘mandatory reporting’ would shift the onus onto viewers and supplement the existing reporting system. Mandatory provisions of this nature do not exist in the United States, where most of the larger social media companies are based.

Accordingly, possible solutions should focus on strengthening the existing reporting system, rather than holding social media platforms liable.

The Supreme Court Hears Sabu Mathew George v. Union of India – Another Blow for Intermediary Liability

The Supreme Court heard arguments in Sabu Mathew George v. Union of India today. This writ petition was filed in 2008, with the intention of banning ‘advertisements’ offering sex-selective abortions and related services from search engine results. According to the petitioner, these advertisements violate Section 22 of the Pre-Conception and Pre-Natal Diagnostic Techniques (Regulation and Prevention of Misuse) Act, 1994 (‘PCPNDT Act’) and must consequently be taken down.

A comprehensive round-up of the issues involved and the Court’s various interim orders can be found here. Today’s hearing focused mainly on three issues – first, the setting up of the Nodal Agency entrusted with providing details of websites to be blocked by search engines; second, the ambit and scope of the word ‘advertisement’ under the PCPNDT Act; and third, the obligation of search engines to find offending content and delete it on their own, without a government directive or judicial order to that effect.

Appearing for the Central Government, the Solicitor General informed the Court that, as per its directions, a Nodal Agency has now been constituted. An affidavit filed by the Centre provided details regarding the agency, including contact details, which would allow individuals to bring offending content to its notice. The Court was informed that the Agency would be functional within a week.

On the second issue, the petitioner’s counsel argued that removal of content must not be limited to paid or commercial advertisements, but should extend to other results that induce or otherwise lead couples to opt for sex-selective abortions. This was opposed by Google and Yahoo!, who contended that organic search results must not be tampered with, as the law only bans ‘advertisements’. Google’s counsel averred that the legislation could never have intended the removal of generic search results, which directly facilitate information and research. On the other hand, the Solicitor General argued that the word ‘advertisement’ should be interpreted keeping the object of the legislation in mind – that is, to prevent sex-selective abortions. On behalf of Microsoft, it was argued that even if the broadest definition of ‘advertisement’ were adopted, what has to be seen before content can be removed is the animus – whether its objective is to solicit sex-selective abortions.

On the third issue, the counsel for the petitioner argued that search engines should automatically remove offending content – advertisements or otherwise – even in the absence of a court order or directions from the Nodal Agency. It was his contention that it was not feasible to keep providing search engines with updated keywords and/or results, and that the latter should employ technical means to automatically block content. This was also echoed by the Court. On behalf of all the search engines, it was pointed out that removal of content without an order from a court or the government runs directly against the Supreme Court’s judgment in Shreya Singhal v. Union of India. In that case, the Court had read down Section 79 of the Information Technology Act, 2000 (‘IT Act’) to hold that intermediaries are only required to take down content pursuant to court orders or government directives. The Court seemed to suggest that Shreya Singhal was decided in the context of a criminal offence (Section 66A of the IT Act) and is distinguishable on that ground.

Additionally, it was pointed out that even if the respondents were to remove content on their own, the lack of clarity over what constitutes an ‘advertisement’ prevents them from deciding what content to remove. Overbroad removal of content might open them up to more litigation from authors and researchers with informative works on the subject. The Court did not offer any interpretation of its own, except to say that the ‘letter and spirit’ of the law must be followed. The lack of clarity on what is deemed illegal could, as pointed out by several counsels, lead to censorship of legitimate information.

Despite these concerns, in its order today, the Court has directed every search engine to form an in-house expert committee that will, based “on its own understanding”, delete content that is violative of Section 22 of the PCPNDT Act. In case of any conflict, these committees should approach the Nodal Agency for clarification, and the latter’s response is meant to guide the search engines’ final decision. The case has been adjourned to April, when the Court will see whether the mechanism in place has been effective in resolving the petitioner’s grievances.

Roundup of Sabu Mathew George vs. Union of India: Intermediary liability and the ‘doctrine of auto-block’

Introduction

In 2008, Sabu Mathew George, an activist, filed a writ petition to ban ‘advertisements’ relating to pre-natal sex determination from search engines in India. According to the petitioner, the display of these results violated Section 22 of the Pre-Natal Diagnostic Techniques (Regulation and Prevention of Misuse) Act, 1994. Between 2014 and 2015, the Supreme Court repeatedly ordered the respondents to block these advertisements. Finally, on November 16, 2016, the Supreme Court ordered the respondents, Google, Microsoft and Yahoo, to ‘auto-block’ advertisements relating to sex-selective determination. It also ordered the creation of a ‘nodal agency’ that would provide search engines with the details of websites to block. The next hearing in this case is scheduled for February 16, 2017.

The judgment has been criticised for over-breadth and the censorship of legitimate content. We discuss some issues with the judgment below.

Are search engines ‘conduits’ or ‘content-providers’?

An earlier order in this case, dated December 4, 2012, states that the respondents argued that they “provided a corridor and did not have any control” over the information hosted on other websites.

There is often confusion surrounding the characterization of search engines as either ‘conduits’ or ‘content-providers’. A conduit is a ‘corridor’ for information, otherwise known as an intermediary. A content-provider, however, produces or alters the displayed content. Authors like Frank Pasquale have suggested that search engines (Google specifically) take advantage of this grey area by portraying themselves as either conduits or content-providers in order to avoid liability. For instance, Google will likely portray itself as a content-provider when it needs to claim First Amendment protection in the United States, and as a conduit for information when it needs to defend itself against First Amendment attacks. When concerns related to privacy arise, search engines attempt to claim editorial rights and freedom of expression. Conversely, when intellectual property matters or defamation claims arise, they portray themselves as ‘passive conduits’.

In the Indian context, there has been similar dissonance about the characterization of search engines. In the aftermath of the Sabu Mathew George judgment, the nature of search engines was debated by several commentators. One commentator has pointed out that the judgment would contradict the Supreme Court’s decision reading down Section 79(3)(b) of the Information Technology Act, 2000 (IT Act) in Shreya Singhal vs. Union of India, where the liability of intermediaries was restricted. This commentator therefore characterized search engines as passive conduits/intermediaries, arguing that the Sabu Mathew George judgment would effectively hold intermediaries liable for content hosted unbeknownst to them. Another commentator has criticised this argument, stating that if Google willingly publishes advertisements through its AdWords system, then it is a publisher and not merely an intermediary. This portrays Google as a content-provider.

Sabu Mathew George defies existing legal standards 

As mentioned above, the Sabu Mathew George judgment contradicts the Supreme Court’s decision in Shreya Singhal, where Section 79(3)(b) of the IT Act was read down to restrict the liability of intermediaries. The Court in Shreya Singhal held that intermediaries can be compelled to take down content only through court orders or government notifications. In the present case, however, the Supreme Court has repeatedly ordered the respondents to devise ways to monitor and censor content themselves, and even to resort to ‘auto-blocking’ results.

The order dated November 16, 2016 also contradicts the Blocking Rules under the Information Technology Act, 2000. In the order, the Supreme Court directed the Centre to create a ‘nodal agency’ which would allow people to register complaints against websites violating Section 22 of the PNDT Act. These complaints would then be passed on to the concerned search engine in the manner described below:

“Once it is brought to the notice of the Nodal Agency, it shall intimate the concerned search engine or the corridor provider immediately and after receipt of the same, the search engines are obliged to delete it within thirty-six hours and intimate the Nodal Agency.”

The functioning of this nodal agency would circumvent the Blocking Rules under the IT Act. Under the Blocking Rules, the Committee for Examination of Requests reviews each blocking request and verifies whether it is in line with Section 69A of the IT Act. The Sabu Mathew George order does not prescribe a similar review system. While the author acknowledges that the nodal agency’s blocking procedure is not a statutory mandate, its actions could still lead to over-blocking.

‘Organic search results’ and ‘sponsored links’

One important distinction in this case is between ‘organic search results’ and ‘sponsored links’. A submission by MeitY (then DeitY) explaining the difference between the two was not addressed by the Supreme Court in the order dated December 4, 2014.

Section 22 of the PNDT Act criminalizes the display of ‘advertisements’, but does not offer a precise definition of the term. The respondents argued that ‘advertisement’ relates to ‘sponsored links’ and not to ‘organic search results’. As per the order dated September 19, 2016, Google and Microsoft agreed to remove ‘advertisements’ and stated that search results should not be contemplated under Section 22 since they are not ‘commercial communication’. However, on November 16, 2016, the Supreme Court stated that the block would extend to both ‘sponsored links’ and ‘organic search results’. The respondents expressed concern about this reasoning, stating that legitimate information on pre-natal sex determination would become unavailable and that the ‘freedom of access to information’ would be restricted. The Court stated that this freedom could be curbed for the sake of the larger good.

The ‘doctrine of auto-block’

In the order dated September 19, 2016, the Court discussed the ‘doctrine of auto-block’ and the responsibility of the respondents to block illegal content themselves. In this order, the Court listed roughly 40 search terms and stated that the respondents should ensure that any attempt at looking up these terms would be ‘auto-blocked’. The respondents also agreed to disable the ‘auto-complete’ feature for these terms.
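To illustrate what an ‘auto-block’ of listed terms amounts to in practice, the sketch below shows a crude keyword filter applied to queries and auto-complete suggestions. The blocked terms and function names are hypothetical placeholders, not the terms annexed to the order, and real search engines implement this very differently.

```python
# Illustrative sketch of the kind of blunt keyword filter the order contemplates.
# The term list here is hypothetical; the actual ~40 terms are listed in the order itself.
BLOCKED_TERMS = {"prenatal sex determination", "gender selection kit"}

def is_blocked(query: str) -> bool:
    """Auto-block: refuse to serve results if any blocked term appears in the query."""
    q = query.lower()
    return any(term in q for term in BLOCKED_TERMS)

def autocomplete(prefix: str, suggestions: list[str]) -> list[str]:
    """Disable auto-complete for blocked terms by filtering them out of suggestions."""
    return [s for s in suggestions if not is_blocked(s)]

# A blunt substring match also catches legitimate queries, e.g. research on the topic --
# which is precisely the over-blocking concern raised by the respondents.
print(is_blocked("laws against prenatal sex determination in India"))  # True
```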

Google has blocked search terms from its auto-complete system in several other countries, often with little success. This article points out that illegal search terms relating to child pornography have been allowed on auto-complete while more innocuous terms like ‘homosexual’ have been blocked by Bing, showing that this system of blocking has several inconsistencies.

Other than a chilling effect on free speech, disabling auto-complete can also have other adverse effects. In one instance, the owner of a sex-toy store complained that her business was not benefitting from the autocomplete feature as several others had. She stated that “…Google is … making it easier for people to find really specific information related to a search term. In a sense it’s like we’re not getting the same kind of courtesy of that functionality.” Similarly, several legitimate websites discussing pre-natal sex determination might lose potential readers or viewers if ‘autocomplete’ is disabled.

Conclusion

The author would like to make two broad suggestions. First, the functioning of the nodal agency should be revisited. The recommended system lacks accountability and transparency, and is likely to lead to over-blocking and a chilling effect on speech.

Second, search engines should not be given overarching powers to censor their own websites. It is well established that this leads to over-censorship. In addition to contradicting Section 79(3)(b) of the IT Act, the Court would also be delegating judicial authority to private search engines.

According to a study conducted by The Centre for Internet & Society, Bangalore in January 2015, searching for keywords relating to pre-natal sex determination on Google, Yahoo and Bing did not yield a large number of ‘organic search results’ or ‘sponsored links’ that would violate Section 22 of the PNDT Act. Between 2015 and 2016, search engines have presumably followed the Supreme Court’s orders and filtered out illegal search results and advertisements. Since instances of illegal search results and advertisements being displayed were not rampant to begin with, there seems to be no urgent need to impose strict measures like ‘auto-blocks’.

The Supreme Court seems to be imposing similarly arbitrary rules on search engines in other judgments. Recently, the Court ordered Google, Microsoft and Yahoo to create a ‘firewall’ that would prevent illegal videos from being uploaded to the internet. It cited the example of China creating a similar firewall to demonstrate the feasibility of the order.

Delhi HC hears the Right to be Forgotten Case

The pending right to be forgotten petition came up for hearing before the Delhi High Court today. The case seeks the deletion of a court order, which has been reproduced on the website Indiankanoon.com, on the ground that it violates the petitioner’s right to privacy and reputation. This post looks at some of the contentions raised before the Court today and its response to them. However, these are mere observations and the Court is yet to take a final decision regarding the petitioner’s prayer(s).

During the course of today’s hearing, the presiding judge observed that all orders of the court constitute public records and cannot be deleted. In any case, it was pointed out that judicial decisions are normally reported and accessible on the National Judicial Data Grid, and their removal from a particular website would not serve the desired purpose. Moreover, the Court thought that even if the petitioner’s relief were granted, removal of content from the Internet was a technical impossibility.

The Court however did acknowledge that certain information could be redacted from judicial orders in some cases. This is routinely done in cases related to rape or other sexual offences owing to the presence of a clear legal basis for such redaction. In the present case however, the Court appeared unconvinced that a similar legal basis existed for redacting information. The petitioner’s counsel contended that personal information might become obsolete or irrelevant in certain cases, reflecting only half-truths and causing prejudice to an individual’s reputation and privacy. However, the Court observed that orders of a court could not become obsolete, and the balance if any would always tilt towards the public interest in transparency.

On several occasions, the petitioner’s counsel made a reference to the European Court of Justice’s decision in Google Spain, which is commonly credited with creating the right to be forgotten in Europe. However, the Google Spain ruling drew a distinction between deleting information from its source and merely delisting it from search engine results. Further, the delisting is limited to results displayed for a search performed for a particular name, ensuring that the information continues to be indexed and displayed if Internet users perform a generic search. No such distinction between delisting and erasure was made during the course of arguments in the present case.
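The distinction can be made concrete with a small sketch. The example below is loosely modelled on the facts of Google Spain (a 1998 auction notice naming the complainant): the page stays in the index but is suppressed only for name-based queries. The URLs and code are purely illustrative, not how any search engine actually implements delisting.

```python
# Minimal sketch of the Google Spain distinction: the page stays in the index,
# but is suppressed only when the query contains the delisted person's name.
# URLs are hypothetical; the name and 1998 notice echo the facts of Google Spain.
INDEX = {
    "https://example.org/old-notice": "1998 auction notice mentioning Mario Costeja",
    "https://example.org/unrelated": "general article on data protection",
}
DELISTED = {("mario costeja", "https://example.org/old-notice")}

def search(query: str) -> list[str]:
    q = query.lower()
    results = [url for url, text in INDEX.items() if any(w in text.lower() for w in q.split())]
    # Delisting, not erasure: drop a URL only if the query includes the delisted name.
    return [u for u in results if not any(name in q and u == url for name, url in DELISTED)]

print(search("Mario Costeja"))        # old notice suppressed for a name-based search
print(search("auction notice 1998"))  # the same page still appears for a generic search
```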

As an alternate prayer, it was argued for the petitioner that his name be anonymised from the court order in question. Here again, the Court felt that there was no legal basis for anonymisation in the present case. In the Court’s opinion, the information in the order was not prejudicial to the petitioner, per se. The fact that information about a family dispute was accessible to the public at large was not seen as particularly damaging.

The Indian legal framework lacks a coherent policy for the anonymisation of names in judicial decisions. Under the Indian Penal Code, publishing the names of victims of certain offences is prohibited. Realising that the provision did not bar courts themselves from publishing victims’ names, the Supreme Court held that names should be anonymised in judgments too, keeping the object of the law in mind. However, research indicates that names continue to be published by courts in a substantial number of cases. A few other laws also provide a legal basis for anonymisation, but these are limited to cases such as minor victims of sexual offences or juvenile offenders. On a few occasions, courts have used their inherent powers to order the anonymisation of party names in family cases – making the decision dependent on the discretion of a judge, rather than the result of a larger policy objective. The increasing digitization of court records and the easy availability of judgments on the Internet have new implications for online privacy. Transparency of the judicial process is crucial, but in the absence of any larger public interest, anonymisation may be warranted in a wider range of cases than is currently permitted.

As a concept, some form of the right to be forgotten may be essential in today’s age. However, its successful implementation is entirely dependent on clear legal principles that strike a balance between competing rights. In the absence of comprehensive data protection legislation, this is difficult. Besides the question of a right to be forgotten, however, this petition presents an interesting opportunity for the Court to analyse and perhaps frame guidelines on where anonymisation may be adequate to protect privacy, without delisting or deleting any content.

Delhi High Court Refuses to make Group Administrators Liable for Content posted by Other Members

In April 2016, two directives issued by two separate state governments in India made social media group administrators (‘administrators’) liable for content circulated by other members of the group. This came in the wake of a series of arrests in India for content posted on WhatsApp. This included arrests of administrators for content posted by other members. In our previous post, we argued that making administrators liable is not legal and severely undermines their right to freedom of speech and expression.

This question surrounding the liability of administrators for content posted by others recently came up for consideration before the High Court of Delhi. In a recent order, the Court recognised the problem of placing this burden on administrators.

In this case, damages for defamation were also sought from the administrator of a Telegram and a Google Group on which the allegedly defamatory statements were published. Recognising the inability of the administrator to influence content on the group, the Court found holding an administrator liable equivalent to holding the ‘manufacturer of the newsprint’ liable for the defamatory statements in the newspaper.

The Court reasoned that at the time of making the group, the administrators could not expect members to make defamatory statements. Further, the Court took into account the fact that the statements posted did not require the administrator’s approval. Consequently, the Court found no reason to hold the administrator responsible.

However, the contention that the administrator has the power to ‘add or remove people from the group/platform as well as to filter’ content was not evaluated on merits, as it was not the pleaded case of the petitioner. The court’s response to such arguments remains to be seen.

In the midst of increasing restrictions on social media groups and administrators, this order is a welcome step. It is imperative that Governments, law enforcement agencies and courts take note to ensure that freedom of expression of administrators and users of such platforms/groups is not undermined.

Facebook – Intermediary or Editor?


This post discusses the regulatory challenges that emerge from the changing nature of Facebook and other social networking websites.

Facebook has recently faced a lot of criticism for circulating fake news and for knowingly suppressing user opinions during the 2016 U.S. elections. The social media website has also been criticised for over-censoring content on the basis of its community standards. In light of these issues, this post discusses whether Facebook can be considered a mere host or transmitter of user-generated content anymore. This post also seeks to highlight the new regulatory challenges that emerge from the changing nature of Facebook’s role.

The Changing Nature of Facebook’s Role

Social media websites such as Facebook and Twitter, Internet Service Providers, search engines, e-commerce websites etc., are all currently regulated as “intermediaries” under Section 79 of the Information Technology Act, 2000 (“IT Act”). An intermediary, “with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record.” Accordingly, they are not liable for user-generated content or communication as long as they observe due diligence and comply with certain conditions such as acting promptly on takedown orders issued by the appropriate government or its agency.

Use of Human Editors

While Facebook is currently regarded as an intermediary, some argue that Facebook has ceased to be a mere host of user-generated content and has acquired a unique character as a platform. This argument was bolstered when Facebook’s editorial guidelines were leaked in May, 2016. The editorial guidelines demonstrated that the apprehensions that Facebook was acting in an editorial capacity were true for at least some aspects of the platform, such as the trending topics. Reports suggest that Facebook used human editors to “inject” or “blacklist” stories in the trending topics list. The social media website did not simply rely on algorithms to generate the trending topics. Instead, it instructed human editors to monitor traditional news media and determine what should be trending topics.

These editorial guidelines revealed that editors at Facebook regularly reviewed algorithmically generated topics and added background information, such as videos or summaries, before publishing them as trending topics. Further, the social media website also relied heavily on traditional news media websites to make such assessments. Critics have pointed out that Facebook’s editorial policy is extremely inadequate, as it does not incorporate guidelines relating to checking for accuracy, encouraging media diversity, respecting privacy and the law, or editorial independence.
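For illustration, the review-then-publish workflow described in the leaked guidelines can be sketched roughly as follows. This is a hypothetical reconstruction: none of the names, functions or fields reflect Facebook’s internal systems.

```python
from dataclasses import dataclass

# Hypothetical sketch of the workflow described above: algorithmically surfaced topics
# pass through a human editor, who can blacklist them, inject others, and attach a
# summary before publication. Names and signatures are illustrative assumptions.

@dataclass
class Topic:
    name: str
    score: float          # engagement signal from the ranking algorithm
    summary: str = ""

def algorithmic_candidates(signals: dict[str, float]) -> list[Topic]:
    return sorted((Topic(n, s) for n, s in signals.items()), key=lambda t: -t.score)

def editorial_pass(candidates: list[Topic], blacklist: set[str],
                   injected: list[Topic], summaries: dict[str, str]) -> list[Topic]:
    published = [t for t in candidates if t.name not in blacklist] + injected
    for t in published:
        t.summary = summaries.get(t.name, t.summary)
    return published

# Usage: the editor drops one algorithmic topic, injects another, and adds a summary.
topics = algorithmic_candidates({"Election debate": 0.9, "Celebrity hoax": 0.8})
print(editorial_pass(topics, blacklist={"Celebrity hoax"},
                     injected=[Topic("Major earthquake", 0.0)],
                     summaries={"Election debate": "Candidates clash on policy."}))
```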

Months after this revelation, Facebook eliminated human editors from its trending platform and began relying solely on algorithms to filter trending topics. However, this elimination has resulted in the new problem of the circulation of fake news. This is especially alarming because increased access to the Internet has meant that a large number of people get their news from social media websites. A recent research report pointed out that nearly 66% of Facebook users in the U.S. get news from Facebook. Similarly, nearly 59% of Twitter users rely on the website for news. In light of this data, completely eliminating human discretion does not appear to be a sensible approach when it comes to filtering politically critical content, such as trending news.

Private Censorship

Facebook has also been criticised widely for over-censoring content. The social media website blocks accounts and takes down content that is in contravention to its “community standards”. These community standards prohibit hate speech, pornography or content that praises or supports terrorism, among others. In India, the social media website faced a lot of flak for censoring content and blocking users during the unrest that followed the death of Burhan Wani, a member of a Kashmiri militant organisation. Reports suggest that nearly 30 academics, activists and journalists from across the world were restricted from discussing or sharing information regarding the incident on Facebook.

Facebook’s community standards have also been criticised for lacking a nuanced approach to issues such as nudity and hate speech. The blocking of content by private entities on the basis of such “community standards” raises concerns of overbreadth and of the possible chilling effect it can have on free speech. As highlighted before, Facebook’s unique position, where it determines what content qualifies as hate speech or praise of terrorism, allows it to throttle alternative voices and influence the online narrative on such issues. The power exercised by Facebook in such instances makes it difficult to identify it as only a host or transmitter of content generated by its users.

Conclusion

The discussion above demonstrates that while Facebook does not behave entirely like a conventional editor, it would be too simplistic to regard it as a mere host of user-generated content.

Facebook is a unique platform that enables content distribution, possesses intimate information about its users, and has the ability to design the space and conditions under which its users can engage with content. It has been argued that Facebook must be considered a “social editor” which “exercises control not only over the selection and organisation of content, but also, and importantly, over the way we find, share and engage with that content.” Consequently, Facebook and other social media websites have been described as “privately controlled public spheres”, i.e. much like traditional media, they have become platforms which provide information and space for political deliberation.

However, if we agree that Facebook is more akin to a “privately controlled public sphere”, we must rethink the regulatory bucket under which we categorise the platform and the limits to its immunity from liability.

This post is written by Faiza Rahman.

Parliamentary Standing Committee on a New Online Hate Speech Provision

Written by Nakul Nayak

(My thanks to Mr. Apar Gupta for providing this lead through his Twitter feed.)

Amidst the noise of the winter session of Parliament last month, a new proposal to regulate online communications was made. On December 7th, the Parliamentary Standing Committee on Home Affairs presented a status report (“Action Taken Report”) to the Rajya Sabha. This report was in the nature of a review of the actions taken by the Central Government on the recommendations and observations contained in another report presented to the Rajya Sabha in February, 2014 – the 176th Report on the Functioning of the Delhi Police (“176th Report”). In essence, these reports studied the prevalent law and order condition in Delhi and provided recommendations, legal and non-legal, for fighting crime.

One of the issues highlighted in the 176th Report was the manifest shortcomings in the Information Technology Act. The Report noted that the IT Act needed to be reviewed regularly. One particular suggestion given by the Delhi Police in this regard related to the lack of clarity in the definition of the erstwhile sec. 66A. The police suggested that “[s]everal generalized terms are being used in definition of section 66A of IT Act like annoyance, inconvenience, danger, obstruction, insult, hatred etc. Illustrative definition of each term should be provided in the Act with some explanation/illustration.”[1] Note that this report was published in 2014, more than a year before the Supreme Court’s historic ruling in Shreya Singhal finding sec. 66A unconstitutional.

An important proposition of law laid down in Shreya Singhal was that any restriction of speech under Art. 19(2) must be medium-neutral. Thus, the contours of the doctrines prohibiting speech will be the same over the internet as over any other medium. At the same time, the Court rejected an Art. 14 challenge to sec. 66A, thereby finding that there exists an intelligible differentia between the internet and other media. This has opened the doors for the legislature to make laws tackling offences that are internet-specific, like, say, phishing.

The Action Taken Report notes that as a result of the striking down of sec. 66A, some online conduct has gone outside the purview of regulation. One such example the report cites is “spoofing”. Spoofing is the dissemination of communications on the internet with a concealed or forged identity. The Report goes on to provide a working definition for “spoofing” and proposes to criminalise it. If this proposal goes through, spoofing will be an instance of an internet-specific offence.

Another example of unjustifiable online conduct that has gone unregulated post-Singhal is hate speech. Hate speech law is a broad head that includes all legal regulations proscribing discriminatory expression that is intended to spread hatred or has that effect. The Report states that all online hate speech must be covered under the IT Act through an exclusive provision. It has suggested that this provision be worded as follows:

“whoever, by means of a computer resource or a communication device sends or transmits any information (as defined under 2(1)(v) of IT Act)

  1. which promotes or attempts to promote, on the ground of religion, race, sex, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between religious, racial, linguistic or regional groups or caste, or communities, or
  2. which carries imputations that any class of persons cannot, by reason of their being members of any religious, racial, linguistic or regional group or caste or community bear true faith and allegiance to constitution of India, as by law established or uphold the sovereignty or integrity of India, or
  3. which counsels advices or propagates that any class of persons shall or should be by reason of their being members of any religious, racial, language or religion group or caste or community or gender be denied or [sic] deprived of their rights as citizens of India, or
  4. carries assertion, appeal, counsel, plea concerning obligation of any class of persons, by reasons of their being members of any religion, racial, language or religion group or caste or community or gender and such assertion, appeal, counsel or plea causes or is likely to cause disharmony or feeling of enmity or hatred or ill-will between such members or other persons,

shall be punishable with ………”

A mere perusal of these provisions reveals that they are substantially similar to the offences covered under sec. 153A and sec. 153B of the Indian Penal Code, which, along with sec. 295A of the IPC, form the backbone of penal regulation of hate speech. Against this backdrop, it would appear that the proposed insertion into the IT Act is redundant. The Action Taken Report justifies the inclusion of this proposed provision on the ground that the impact caused by the “fast and wider spread of the online material … may be more severe and damaging. Thus, stricter penalties may be prescribed for the same as against similar sections mentioned in IPC.” However, if the rationale is to apply stricter penalties to online content, then the Report could very well have suggested amendments to sec. 153A and sec. 153B.

What is disconcerting, however, is the assumption that because incendiary content is posted online, its effect will be “more severe and damaging”. Indeed, social media has had a hand in the spread of violence and fear in tense situations over the last few years, from the North East exodus to the Muzaffarnagar riots and, as recently, the Dadri lynching. Yet the blanket assertion that online content is more damaging does not take into account many variables, such as:

  • the influence of the speaker – a popular public figure with a large following can exercise much more influence on public behaviour in an offline medium than a common man can on social media,
  • the atmospheric differences between viewing online content in your house and listening to speech at a charged rally, or
  • the internal contradictions of online speech, like the influence exerted by a 140-character tweet vis-à-vis a communally sensitive video (note here that the Supreme Court itself has emphatically recognized the difference between motion picture and the written word in stirring emotion in KA Abbas).

The Report could perhaps benefit from a more nuanced understanding of hate speech. A well-recognized effort in that direction is Prof. Susan Benesch’s Dangerous Speech framework. Prof. Benesch has devised a five-point examination of incendiary speech on the basis of the speaker, the audience, the socio-historical context, the speech act, and the means of transmission. This characterises the effects of the alleged hate speech in a more organized manner, allowing for a more informed adjudication on the possible pernicious effect that said speech might have.

An interesting question of debate could well centre on the proposed enhanced penalty for online hate speech. Would a greater penalty for online speech (as opposed to offline speech) attract the ire of the Court’s doctrinal stance of medium-neutrality? Note that the Court in Shreya Singhal only mentions that the standards for determining speech restrictions must be medium-neutral. Yet the premise of enhanced penalties is based on the greater speed and access of online speech, which is necessarily internet-specific. Will a Court’s adjudication of penalties for criminalized speech amount to a standard or not?

Retweeting akin to Fresh Publication?

The Report also suggests that any person who shares culpable online content “should also be liable for the offence”. This includes those who “innocently” forward such content. Thus, for instance, anyone who retweets an original tweet that is later criminalized will also be found liable for the same offence, as if he had originally uploaded the content. According to the Report, “[t]his would act as a deterrent in the viral spread of such content.”

Forwarding content originally uploaded by one individual is a popular feature of social media websites. Twitter’s version is called ‘Retweet’, while Facebook’s is called ‘Share’. When a person X shares a person Y’s post, it may mean one of two things:

  1. X endorses said opinion and expresses the same, through the mask of Y.
  2. X conveys to his followers the very fact that Y remarked said content. (In fact, many individuals provide a disclaimer on their Twitter profiles that Retweets do not necessarily mean endorsements.)

In an informative academic article, Nandan Kamath, a distinguished lawyer, termed people who forward information as “content sharers”, characterizing them as “a new breed of intermediaries”. Kamath goes on to liken content sharing to linked quotations and not as fresh publications. In doing so, he calls for restricted liabilities to content sharers. Kamath also examines the UK position on prosecution for social media content, which is multi-faceted, requiring “evidential sufficiency” and “public interest”.
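Kamath’s ‘linked quotation’ point is easier to see with a toy data model: a retweet is typically stored as a pointer to the original post rather than as a fresh copy of its text. The classes and fields below are illustrative assumptions, not Twitter’s actual data model.

```python
from dataclasses import dataclass
from typing import Optional

# Toy model of why a retweet reads more like a linked quotation than a fresh
# publication: the platform stores a reference to the original, not a new copy.
# Class and field names are hypothetical, not Twitter's real schema.

@dataclass
class Tweet:
    author: str
    text: Optional[str] = None
    retweet_of: Optional["Tweet"] = None   # reference to the original, not duplicated content

    def displayed_text(self) -> str:
        return self.text if self.retweet_of is None else self.retweet_of.displayed_text()

original = Tweet(author="Y", text="Allegedly culpable remark")
shared = Tweet(author="X", retweet_of=original)

# X's tweet carries no text of its own; it merely points at Y's remark.
print(shared.displayed_text())      # "Allegedly culpable remark"
print(shared.text is None)          # True -- nothing authored by X is stored
```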

The Action Taken Report appears linear in its stance of criminalizing all content sharing where the expression may be culpable. In doing so, it assumes that all content sharing amounts to original speech. This approach turns a blind eye to instances where a sharer intends the post as a linked quotation. The Report would do well to take these concerns into account and develop a more nuanced policy.

Nakul Nayak was a Fellow at the Centre for Communication Governance from 2015-16.

[1] Para 3.10.2