Today, the Centre for Communication Governance (CCG) is happy to release a working paper titled ‘Tackling the dissemination and redistribution of NCII’ (accessible here). The dissemination and redistribution of non-consensual intimate images (“NCII”) is an issue that has plagued platforms, courts, and lawmakers in recent years. The difficulty of restricting NCII is particularly acute on ‘rogue’ websites that are unresponsive to user complaints. In India, this has prompted victims to petition courts to block webpages hosting their NCII. However, even when courts do block these webpages, the same NCII content may be re-uploaded at different locations.
The goal of our proposed solution is to: (i) reduce the time, cost, and effort associated with victims having to go to court to have their NCII on ‘rogue’ websites blocked; (ii) ensure victims do not have to re-approach courts for the blocking of redistributed NCII; and (iii) provide administrative, legal, and social support to victims.
Our working paper proposes the creation of an independent body (“IB”) to: maintain a hash database of known NCII content; liaise with government departments to ensure the blocking of webpages hosting NCII; potentially crawl targeted areas of the web to detect known NCII content; and work with victims to raise awareness of NCII-related harms and provide administrative and legal support. Under our proposed solution, victims would be able to simply submit URLs hosting their NCII to a centralised portal maintained by the IB. The IB would then vet the victim’s complaint, coordinate with government departments to block the URL, and eventually hash and add the content to a database to combat redistribution.
This will significantly reduce the time, money, and effort exerted by victims to have their NCII blocked, whether at the stage of dissemination or redistribution. The issue of redistribution can also potentially be tackled through a targeted, proactive crawl of websites by the IB for known NCII pursuant to a risk impact assessment. Our solution envisages several safeguards to ensure that the database is only used for NCII, and that lawful content is not added to the database. Chief amongst these is the use of multiple human reviewers to vet the complaints made by victims and a public interest exemption where free speech and privacy interests may need to be balanced.
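To make this workflow concrete, below is a minimal illustrative sketch (in Python, and not drawn from the working paper itself) of how such a hash database might be organised so that the IB stores only fingerprints of vetted content, never the images. The table layout, function names, and the use of SHA-256 are assumptions made for illustration; industry initiatives such as StopNCII reportedly rely on perceptual hashes (e.g. PhotoDNA or PDQ) that can also match re-encoded or resized copies, which this sketch does not attempt.

```python
import hashlib
import sqlite3

# Illustrative sketch only. SHA-256 matches byte-identical copies;
# a production system would use a perceptual hash robust to
# re-encoding, resizing and cropping.

def file_hash(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def init_db(db_path: str = "ncii_hashes.db") -> sqlite3.Connection:
    """Create a table that stores only hashes and provenance, never images."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS hashes ("
        "digest TEXT PRIMARY KEY, "
        "source TEXT, "
        "added_on TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    return conn

def add_vetted_item(conn: sqlite3.Connection, path: str, source: str) -> None:
    """Record the hash of content approved by multiple human reviewers."""
    conn.execute(
        "INSERT OR IGNORE INTO hashes (digest, source) VALUES (?, ?)",
        (file_hash(path), source),
    )
    conn.commit()

def is_known(conn: sqlite3.Connection, path: str) -> bool:
    """Check whether a candidate file matches a known NCII hash."""
    row = conn.execute(
        "SELECT 1 FROM hashes WHERE digest = ?", (file_hash(path),)
    ).fetchone()
    return row is not None
```

Storing only hashes and provenance metadata mirrors the safeguard that lawful content should not enter the database: every row can be traced back to a vetted complaint or court order, and the underlying image never needs to be retained by the IB.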
A full summary of our recommendations is as follows:
Efforts should be made towards setting up an independently maintained hash database for NCII content.
The hash database should be maintained by the IB, and it must undertake stringent vetting processes to ensure that only NCII content is added to the database.
Individuals and vetted technology platforms should be able to submit NCII content for inclusion into the database; NCII content removed pursuant to a court order can also be included in the database.
The IB may be provided with a mandate to proactively crawl the web in a targeted manner to detect copies of identified NCII content, pursuant to a risk impact assessment (a minimal illustrative sketch follows this list). This will help shift the burden of identifying copies of known NCII away from victims.
The IB can supply the Department of Telecommunications (“DoT”) with URLs hosting known NCII content, and work with victims to alleviate the burdens of locating and identifying repeat instances of NCII content.
The IB should be able to work with organisations to provide social, legal, and administrative support to victims of NCII; it would also be able to coordinate with law enforcement and regulatory agencies in facilitating the removal of NCII.
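As flagged in the recommendation on proactive crawling above, a targeted crawl could reuse the same hash database. The snippet below is again only a hedged sketch: it assumes a list of candidate URLs produced by a prior risk impact assessment (for instance, pages on ‘rogue’ websites already subject to blocking orders), uses the third-party requests library, and performs only exact-hash matching against the table from the earlier sketch; it does no discovery or link-following of its own.

```python
import hashlib
import sqlite3
import requests  # third-party: pip install requests

def crawl_candidates(conn: sqlite3.Connection, candidate_urls: list[str]) -> list[str]:
    """Fetch candidate URLs and flag those whose content matches a known hash.

    `candidate_urls` is assumed to come from a risk impact assessment;
    flagged URLs would then be referred onward (e.g. to the DoT) for blocking.
    """
    flagged = []
    for url in candidate_urls:
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # unreachable, blocked, or erroring pages are skipped
        digest = hashlib.sha256(resp.content).hexdigest()
        match = conn.execute(
            "SELECT 1 FROM hashes WHERE digest = ?", (digest,)
        ).fetchone()
        if match is not None:
            flagged.append(url)
    return flagged
```

Because only hashes are compared, the crawler never needs to build its own archive of the flagged material, which keeps the proactive detection step consistent with the privacy safeguards described above.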
Our working paper draws on recent industry efforts to curb NCII, as well as the current multi-stakeholder approach used to combat child sexual abuse material online. However, our regulatory solution is specifically targeted at restricting the dissemination and redistribution of NCII on ‘rogue’ websites that are unresponsive to user complaints. We welcome inputs from all stakeholders as we work towards finalising our proposed solution. Please send comments and suggestions to <ccg@nludelhi.ac.in>.
Part I of this two-part series examined the contours of Rule 16 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”), and the Bombay High Court’s rationale for refusing to stay the rule in the Leaflet case. This second part examines the legality and constitutionality of Rule 16. It argues that the rule’s constitutionality may be contested because it deprives impacted content publishers of a hearing when their content is restricted. It also argues that the Ministry of Information and Broadcasting (“MIB”) should provide information on blocking orders under Rule 16 to allow them to be challenged, both by users whose access to information is curtailed, and by publishers whose right to free expression is restricted.
Rule 16’s legality
At its core, Rule 16 is a legal provision granting discretionary authority to the government to take down content. Consistently, the Supreme Court (“SC”) has maintained that to be compliant with Article 14, discretionary authority must be backed by adequate safeguards.[1] Admittedly, Rule 16 is not entirely devoid of safeguards since it envisages an assessment of the credibility of content blocking recommendations at multiple levels (refer Part I for context). But this framework overlooks a core principle of natural justice – audi alteram partem (hear the other side) – by depriving the impacted publishers of a hearing.
In Tulsiram Patel, the SC recognised principles of natural justice as part of the guarantee under Article 14 and ruled that any law or state action abrogating these principles is susceptible to a constitutionality challenge. But the SC also found that natural justice principles are not absolute and can be curtailed under exceptional circumstances. Particularly, audi alteram partem can be excluded in situations where the “promptitude or the urgency of taking action so demands”.
Arguably, the suspension of pre-decisional hearings under Rule 16 is justifiable considering the rule’s very purpose is to empower the Government to act with alacrity against content capable of causing immediate real-world harm. However, this rationale does not preclude the provision of a post-decisional hearing under the framework of the 2021 IT Rules. This is because, as posited by the SC in Maneka Gandhi (analysed here and here), the “audi alteram partem rule is sufficiently flexible” to address “the exigencies of myriad kinds of situations…”. Thus, a post-decisional hearing to impacted stakeholders, after the immediacy necessitating the issuance of interim blocking directions had subsided, could have been reasonably accommodated within Rule 16. Crucially, this would create a forum for the State to justify the necessity and proportionality of its speech restriction to the individuals impacted (strengthening legitimacy) and the public at large (strengthening the rule of law and public reasoning). Finally, in the case of ex-facie illegal content, originators are unlikely to avail of post-facto hearings, mitigating concerns of a burdensome procedure.
Rule 16’s exercise by MIB
Opacity
MIB has exercised its power under Rule 16 of the 2021 IT Rules on five occasions. Collectively, it has ordered the blocking of approximately 93 YouTube channels, 6 websites, 4 Twitter accounts, and 2 Facebook accounts. Each time, MIB has announced content blocking only through press releases after the orders were passed but has not disclosed the actual blocking orders.
MIB’s reluctance to publish its blocking orders renders the manner in which it exercises power under Rule 16 opaque. Although press statements inform the public that content has been blocked, it is the blocking orders themselves that are required (under Rules 16(2) and 16(4)) to record the reasons for which the content has been blocked. As discussed above, this limits the right to free expression of the originators of the content and denies them the ability to be heard.
Additionally, content recipients, whose right to view content and access information is curtailed through such orders, are not being made aware of the existence of these orders by the Ministry directly. Pertinently, the 2021 IT Rules appear to recognise the importance of informing users about the reasons for blocking digital content. This is evidenced by Rule 4(4), which requires ‘significant social media intermediaries’ to display a notice to users attempting to access proactively disabled content. However, in the absence of similar transparency obligations upon MIB under the 2021 IT Rules, content recipients aggrieved by the Ministry’s blocking orders may be compelled to rely on the cumbersome mechanism under the Right to Information Act, 2005 to seek the disclosure of these orders to challenge them.
Although the 2021 IT Rules do not specifically mandate the publication of blocking orders by MIB, this obligation can be derived from the Anuradha Bhasin verdict. Here, in the context of the Telecom Suspension Rules, the SC held that any order affecting the “lives, liberty and property of people” must be published by the government, “regardless of whether the parent statute or rule prescribes the same”. The SC also held that the State should ensure the availability of governmental orders curtailing fundamental rights unless it claims specific privilege or public interest for refusing disclosure. Even then, courts will finally decide whether the State’s claims override the aggrieved litigants’ interests.
Considering the SC’s clear reasoning, MIB ought to make its blocking orders readily available in the interest of transparency, especially since a confidentiality provision restricting disclosure, akin to Rule 16 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“2009 Blocking Rules”), is absent in the 2021 IT Rules.
Overuse
Another concerning trend is MIB’s invocation of its emergency content-blocking power as the norm rather than the exception it was meant to be. For context, the 2021 IT Rules provide a non-emergency blocking process under Rules 14 and 15, whereunder impacted publishers are provided a pre-decisional hearing before an Inter-Departmental Committee required to be constituted under Rule 13(1)(b). However, thus far, MIB has exclusively relied on its emergency power to block ostensibly problematic digital content, including fake news.
While the Bombay High Court in the Leaflet case declined to expressly stay Rule 14 (noting that the Inter-Departmental Committee was yet to be set up) (¶19), the High Court’s stay on Rule 9(3) creates a measure of ambiguity as to whether Rules 14 and 15 are currently in effect. This is because Rule 9(3) states that there shall be a government oversight mechanism to “ensure adherence to the Code of Ethics”. A key part of this mechanism is the Inter-Departmental Committee, whose role is to decide “violation[s] or contravention[s] of the Code of Ethics” (Rule 14(2)). The High Court even notes that it is “incomprehensible” how content may be taken down under Rule 14(5) for violating the Code of Ethics (¶27). Thus, despite the Bombay High Court’s refusal to stay Rule 14, it is arguable that the High Court’s stay on the operation of Rule 9(3), which prevents the ‘Code of Ethics’ from being applied against online news and curated content publishers, may logically extend to Rules 14(2) and 15. However, even if the Union were to proceed on a plain reading of the Leaflet order and infer that the Bombay High Court did not stay Rules 14 and 15, it is unclear if the MIB has constituted the Inter-Departmental Committee to facilitate non-emergency blocking.
MeitY has also liberally invoked its emergency blocking power under Rule 9 of the 2009 Blocking Rules to disable access to content. Illustratively, in early 2021 Twitter received multiple blocking orders from MeitY, at least two of which were emergency orders, directing it to disable over 250 URLs and a thousand accounts for circulating content relating to farmers’ agitation against contentious farm laws. Commentators have also pointed out that there are almost no recorded instances of MeitY providing pre-decisional hearings to publishers under the 2009 Blocking Rules, indicating that in practice this crucial safeguard has been rendered illusory.
Conclusion
Evidently, there is a need for the MIB to be more transparent when invoking its emergency content-blocking powers. A significant step in this direction would be ensuring that final blocking orders, which ratify emergency blocking directions, are made readily available, or at the very least provided to publishers/originators. Similarly, displaying notices to users trying to access blocked content would also enhance transparency. Crucially, these measures would reduce information asymmetry regarding the existence of blocking orders and allow a larger section of stakeholders, including the oft-neglected content recipients, the opportunity to challenge such orders before constitutional courts.
Additionally, the absence of hearings for impacted stakeholders at any stage of the emergency blocking process under Rule 16 of the 2021 IT Rules limits their right to be heard and to defend the legality of ‘at-issue’ content. Whilst the justification of urgency may be sufficient to deny a pre-decisional hearing, the procedural safeguard of a post-decisional hearing should be incorporated by MIB.
The aforesaid legal infirmities plague Rule 9 of the 2009 Blocking Rules as well, given its similarity with Rule 16 of the 2021 IT Rules. The Tanul Thakur case presents an ideal opportunity for the Delhi High Court to examine and address the limitations of these rules. Civil society organisations have for years advocated (here and here) for incorporation of a post-decisional hearing within the emergency blocking framework under the 2009 Blocking Rules too. Its adoption and diligent implementation could go a long way in upholding natural justice and mitigating the risk of arbitrary content blocking.
[1]State of Punjab v. Khan Chand, (1974) 1 SCC 549; Virendra v. The State of Punjab & Ors., AIR 1957 SC 896; State of West Bengal v. Anwar Ali, AIR 1952 SC 75.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”) were challenged before several High Courts (refer here and here) almost immediately after their promulgation. In one such challenge, initiated by the publishers of the online news portal ‘The Leaflet’, the Bombay High Court, by an order dated August 14, 2021, imposed an interim stay on the operation of Rules 9(1) and (3) of the 2021 IT Rules. Chiefly, this was done because these provisions subject online news and curated content publishers to a vaguely worded ‘code of ethics’, adherence to which would have had a ‘chilling effect’ on their freedom of speech. However, the Bombay High Court refused to stay Rule 16 of these rules, which empowers the Ministry of Information and Broadcasting (“MIB”) to direct blocking of digital content during an “emergency” where “no delay is acceptable”.
Part I of this two-part series examines the contours of Rule 16 and argues that the Bombay High Court overlooked the procedural inadequacy of this rule when refusing to stay the provision in the Leaflet case. Part II assesses the legality and constitutionality of the rule.
Overview of Rule 16
Part III of the 2021 IT Rules authorises the MIB to direct blocking of digital content in case of an ‘emergency’ in the following manner:
The MIB has correctly noted that Rule 16 is modelled after Rule 9 of the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“2009 Blocking Rules”) (analysed here), and confers upon the MIB similar emergency blocking powers which the Ministry of Electronics and Information Technology (“MeitY”) has possessed since 2009. Both provisions confer discretion upon authorised officers to determine what constitutes an emergency but fail to provide a hearing to impacted publishers or intermediaries at any stage.
Judicial findings on Rule 16
The Bombay High Court’s order in the Leaflet case is significant since it is the first time a constitutional court has recorded its preliminary findings on the rule’s legitimacy. Here, the Bombay High Court refused to stay Rule 16 primarily for two reasons. First, the High Court held that Rule 16 of the 2021 IT Rules is substantially similar to Rule 9 of the 2009 Blocking Rules, which is still in force. Second, the grounds upon which Rule 16 permits content blocking are coextensive with the grounds on which speech may be ‘reasonably restricted’ under Article 19(2) of the Indian Constitution. Respectfully, the plausibility of this reasoning is contestable:
Equivalence with the 2009 Blocking Rules: Section 69A of the IT Act and the 2009 Blocking Rules were previously challenged in Shreya Singhal, where both were upheld by the Supreme Court (“SC”). Establishing an equivalence between Rule 16 of the 2021 IT Rules and Rule 9 of the 2009 Blocking Rules to understand the constitutionality of the former would have been useful only if Shreya Singhal contained a meaningful analysis of Rule 9. However, the SC did not examine this rule but rather broadly upheld the constitutionality of the 2009 Blocking Rules as a whole due to the presence of certain safeguards, including: (a) the non-emergency process for content blocking under the 2009 Blocking Rules includes a pre-decisional hearing to identified intermediaries/originators before content is blocked; and (b) the 2009 Blocking Rules mandate the recording of reasons in blocking orders so that they may be challenged under Article 226 of the Constitution.
However, the SC did not consider that the emergency blocking framework under Rule 9 of the 2009 Blocking Rules not only allows MeitY to bypass the essential safeguard of a pre-decisional hearing to impacted stakeholders but also fails to provide them with either a written order or a post-decisional hearing. It also did not address that Rule 16 of the 2009 Blocking Rules, which mandates confidentiality of blocking requests and subsequent actions, empowers MeitY to refuse disclosure of blocking orders to impacted stakeholders thus depriving them of the opportunity to challenge such orders.
Thus, the Bombay High Court’s attempt in the Leaflet case to claim equivalence with Rule 9 of the 2009 Blocking Rules as a basis to defend the constitutionality of Rule 16 of the 2021 IT Rules was inapposite since Rule 9 itself was not substantively reviewed in Shreya Singhal, and its operation has since been challenged on constitutional grounds.
Procedural safeguards: Merely because Rule 16 of the 2021 IT Rules permits content blocking only under the circumstances enumerated under Article 19(2), does not automatically render it procedurally reasonable. In People’s Union of Civil Liberties (“PUCL”) the SC examined the procedural propriety of Section 5(2) of the Telegraph Act, 1885, which permits phone-tapping. Even though this provision restricts fundamental rights only on constitutionally permissible grounds, the SC found that substantive law had to be backed by adequate procedural safeguards to rule out arbitrariness. Although the SC declined to strike down Section 5(2) in PUCL, it framed interim guidelines to govern the provision’s exercise to compensate for the lack of adequate safeguards.
Since Rule 16 restricts the freedom of speech, its proportionality should be tested as part of any meaningful constitutionality analysis. To be proportionate, restrictions on fundamental rights must satisfy four prongs[1]: (a) legality – the requirement of a law having a legitimate aim; (b) suitability – a rational nexus between the means adopted to restrict rights and the end of achieving this aim; (c) necessity – proposed restrictions must be the ‘least restrictive measures’ for achieving the aim; and (d) balancing – a balance between the extent to which rights are restricted and the need to achieve the aim. Justice Kaul’s opinion in Puttaswamy (9JB) also highlights the need for procedural safeguards against the abuse of measures interfering with fundamental rights (para 70, Kaul J).
Arguably, by demonstrating the connection between Rule 16 and Article 19(2), the Bombay High Court has proven that Rule 16 potentially satisfies the ‘legality’ prong. However, even at an interim stage, before finally ascertaining Rule 16’s constitutionality by testing it against the other proportionality parameters identified above, the Bombay High Court should have considered whether the absence of procedural safeguards under this rule merited staying its operation.
For these reasons, the Bombay High Court could have ruled differently in deciding whether to stay the operation of Rule 16 in the Leaflet case. While these are important considerations at the interim stage, ultimately the larger question of constitutionality must be addressed. The second post in this series will critically examine the legality and constitutionality of Rule 16.
[1]Modern Dental College and Research Centre and Ors. v. State of Madhya Pradesh and Ors., (2016) 7 SCC 353; Justice K.S. Puttaswamy & Ors. v. Union of India (UOI) & Ors., (2019) 1 SCC 1; Anuradha Bhasin and Ors. v. Union of India (UOI) & Ors., (2020) 3 SCC 637.
On 25 February 2021, the Central Government notified the Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules, 2021 (‘2021 Rules’). These Rules have been the subject of much controversy as social media intermediaries and media houses have challenged them in various High Courts across the country. The Bombay High Court in AGIJ Promotion of Nineteenonea Media v Union of India stayed the operation of Rule 9(1) and Rule 9(3), the former provision mandating adherence to the ‘Code of Ethics’ and the latter creating a three-tiered structure to regulate online curated content. The High Court held that these rules contravened Article 19(1)(a) of the Constitution and transgressed the rule-making power delegated by the Information Technology Act, 2000 (‘IT Act’). This was affirmed by the Madras High Court in Digital News Publishers Association v Union of India, which noted that the order passed by the Bombay High Court had a pan-India effect.
While the Information Technology (Intermediaries Guidelines) Rules, 2011 applied solely to intermediaries, the 2021 Rules cover both intermediaries and publishers of digital content, including OTT platforms (which fall under ‘publisher of online curated content’). At the outset, the departure from utilising existing legislation such as the Cinematograph Act, 1952, or the Cable Television Networks (Regulation) Act, 1995, and invoking the IT Act to regulate publishers of film and television is curious. The aforementioned Bombay High Court judgement addressed this, observing that fields which stood occupied by independent legislations could not possibly be brought within the purview of the 2021 Rules.
The regulation of OTT platforms assumes particular significance given the recent controversies concerning web series that allegedly contain objectionable content or offend religious beliefs. For instance, FIRs were lodged against the makers of the web series Tandav, which led to Amazon Prime Video’s India head moving the Supreme Court for protection against arrest. Similarly, Netflix’s A Suitable Boy also triggered a police case after a political leader objected to a scene in which the protagonist kisses a Muslim boy at a Hindu temple. FIRs have also been registered against the makers and producers of Mirzapur for offending religious beliefs, and a petition has been filed before the Supreme Court alleging that the series portrays the Uttar Pradesh district in a negative manner.
This blog will first set out how the 2021 Rules are applicable to OTT platforms. Second, it will examine whether the regulatory mechanisms conceived by the 2021 Rules provide unduly broad censorial powers to the Central Government, potentially threatening free speech and expression guaranteed by the Indian Constitution.
The 2021 Rules and OTT Platforms
In February 2019, the Ministry of Electronics and Information Technology (‘MeitY’) told the Delhi High Court that the IT Act already provided stringent provisions for website blocking (under Section 69A) in case of illegal content on OTT Platforms and therefore, no mandamus could be issued to the Centre for framing general guidelines or separate provisions for OTT content. However, in February 2021, amidst rising controversies revolving around various shows, the Centre notified the 2021 Rules, Part III of which is titled “Code of Ethics and Procedure and Safeguard in Relation to Digital/Online Media”.
Rule 2(u) of the 2021 Rules defines “publisher of online curated content” as any publisher who makes available to users, on demand, audio-visual content (that is owned or licensed by the publisher) via a computer resource over the internet. OTT platforms such as Netflix, Amazon Prime Video, and Disney+Hotstar squarely fall within the ambit of such ‘publishers of online curated content’. Under Rule 8(2) of the 2021 Rules, such publishers are bound by Part III of the 2021 Rules, while Rule 9 requires such publishers to adhere to the ‘Code of Ethics’ found in the Appendix to the 2021 Rules. This Code lays down five broad principles, ranging from age classification of content to exercising due caution and discretion while depicting India’s multi-cultural background.
Perhaps the most salient feature of Part III is its three-tier structure for redressal of grievances against content, which is applicable to both publishers of news and current affairs and publishers of online curated content. Any complaints that a publisher’s content violates the Code of Ethics or that the publisher is in breach of any rule in Part III of the 2021 Rules are addressed through the following structure:
Beyond the 2021 Rules, the Ministry of Information & Broadcasting (‘MIB’) will also establish an “Online Grievance Portal” where any person who objects to a publisher’s content can register their grievance. This grievance will be electronically directed to the publisher, the Ministry, as well as the self-regulating body.
The impact of the 2021 Rules
Films released in theatres in India are subjected to pre-certification by the Central Board of Film Certification (‘CBFC’) as per the Cinematograph Act, 1952, and television programmes are governed by the Cable Television Networks (Regulation) Act, 1995. However, OTT platforms had until now escaped the scrutiny of the law due to an absence of clarity as to which Ministry would regulate them, i.e., the MeitY or the MIB. The matter was resolved in November 2020 when the Government of India (Allocation of Business) Rules, 1961 were amended to include “Films and Audio-Visual programmes made available by online content providers” within the ambit of the MIB.
Overregulation and independent regulatory bodies
The 2021 Rules pose a danger of overregulation vis-a-vis OTT platforms; they promote self-censorship and potentially increase government oversight over digital content. Beginning with the second tier of the mechanism established by the 2021 Rules, it requires a self-regulatory body to be set up, headed by a Supreme Court or High Court Judge or an independent eminent person from the field of media, broadcasting, entertainment, child rights, human rights or such other field; the members of this body, not exceeding six, are experts from various fields. Rule 12(3) dictates that the self-regulating body, after constitution, needs to register itself with the MIB. However, this registration is predicated upon the subjective satisfaction of the MIB that the body has been constituted according to Rule 12(2) and has agreed to perform the functions laid down in sub-rules (4) and (5). This effectively hinders the independence of the body, as the Rules fail to circumscribe the discretion the MIB can exercise in refusing registration.
This self-regulating body can sit in appeal as well as issue guidance or advisories to the publishers, including requiring the issuance of apologies or inclusion of warning cards by publishers. However, decisions pertaining to the need to take action to delete or modify content, or instances where the publisher fails to comply with guidance or advisories of the body, are to be referred to the Oversight Mechanism under Rule 13 [Rules 12(5)(e) and 12(7)].
Additional concerns arise at Level III – the Oversight Mechanism under Rule 13. This Oversight Mechanism requires the MIB to form an Inter-Departmental Committee (‘IDC’), which shall consist of representatives from various other Ministries; the Chairperson of this Committee is an Authorised Officer appointed by the MIB. Rule 14(2) stipulates that the Committee shall meet periodically to hear complaints arising out of grievances with respect to decisions taken at Level I or II, or complaints referred to it directly by the MIB. This may pose certain challenges, as the IDC, which is constituted and chaired by the MIB and consists of individuals from other Ministries, will effectively also preside over complaints referred to it by the MIB. Furthermore, the recommendations of the IDC are made to the MIB itself for issuance of appropriate orders and directions for compliance. This has the potential to create a conflict of interest, and it violates the principle of natural justice that one cannot be a judge in their own case.
A bare perusal of the functions of Level II and Level III portrays that the powers bestowed upon the self-regulating body and the IDC overlap to a great extent. The self-regulating body may be rendered irrelevant as decisions regarding modification or removal of content or punishment of the publisher for failure to comply rest with the IDC. As the IDC is constituted by the MIB and its recommendations are referred to the MIB for issuance of orders to the publishers, for all intents and purposes, the Central Government has the final say in the online content that can be published by OTT platforms. This may make publishers wary and could have a chilling effect on freedom of speech and expression as content unfavourable to or critical of the government in power may be referred to the IDC/MIB and blocked.
The IDC has considerable discretion when it comes to its position as an Appellate Authority. More importantly, Rule 16, which allows the Authorised Officer to block content under Section 69A of the IT Act in any case of emergency, has potential for misuse. To confer upon one individual appointed by the MIB the power to block content, without providing an opportunity for hearing to the publisher, is excessive and does not provide sufficient procedural safeguards; an issue that had been glossed over by the Supreme Court while upholding the constitutionality of Section 69A and the Information Technology (Blocking Rules), 2009, in Shreya Singhal v Union of India.
In Hiralal M. Shah v The Central Board of Film Certification, Bombay, an order of the Joint Secretary to the Government of India directing that a Marathi feature film not be certified for public exhibition was challenged, and the Bombay High Court held that the Joint Secretary was not qualified to judge the effects of the film on the public, nor did he have experience in the examination of films. The High Court observed that allowing a bureaucrat to sit in judgement over the same would make “a mockery of the substantive right of appeal conferred on the producer”. According to the Court, it was difficult to comprehend why an informed decision by an expert body, i.e. the Film Certification Appellate Tribunal constituted under the Cinematograph Act, 1952, was to be replaced with the moral standards of a bureaucrat. A similar mechanism for regulation is being constructed by way of the 2021 Rules.
The three-tier mechanism stipulated by the 2021 Rules also raises the question of why OTT platforms need to be regulated under the IT Act in the first place. If regulation is required, instead of adverting to the IT Act or the Cinematograph Act, 1952, which regulates traditional media, the regulatory system envisaged under the Cinematograph Act could be emulated to some extent in an alternate legislation solely governing OTT platforms. While the Cinematograph Act may be inadequate in terms of regulating new media, the current IT Rules stretch the boundaries of the rule-making power delegated under the IT Act by delving into an area of regulation that is not permissible under that Act.
The 2021 Rules are subordinate legislation, and it remains contested whether Part III of the Rules could have been promulgated using the rule-making power conferred on the Central Government under the IT Act. In the case of State of Tamil Nadu v P. Krishnamoorthy, the Supreme Court held that delegated legislation could be challenged if there was failure to conform to the statute under which it was made, if it exceeded the limits of authority conferred by the enabling Act, or if there was manifest arbitrariness or unreasonableness (to an extent where the Court may say that the legislature never intended to give authority to make such rules). With respect to the 2021 Rules, where such broad and arbitrary powers are conferred on entities which could restrict fundamental rights under Articles 19(1)(a) and 19(1)(g), such conferral should stem from a parent Act that lays down the objective and purpose that drives such regulation. The IT Act only regulates content to the extent of specific offences under Sections 66F, 67, 67A, 67B etc. that are to be judicially assessed, and Section 79 lays down guidelines that must be followed by intermediaries to avail of safe harbour. However, by introducing a distinct class of entities that must adhere to “digital media ethics” and must constitute their own regulation bodies, the 2021 Rules prima facie amount to overreach.
Are the IT Rules Violative of the Constitutional Rights of Free Speech and Expression?
The three-tier mechanism under the 2021 Rules may have a chilling effect on creators and producers, who may be disincentivized from publishing and distributing content that could potentially be considered offensive to even a small section of society. For example, even in the absence of the 2021 Rules, the makers of Tandav agreed to make voluntary cuts and tendered an apology. Similarly, despite the partial stay of the 2021 Rules by the High Courts of Bombay and Madras, OTT platforms have stated that they will play it safe and exercise restraint over potentially controversial content. After the 2021 Rules, criticism that offends the sensibilities of an individual could potentially result in a grievance under Part III, ultimately leading to content being restricted.
In addition to this, the Code of Ethics appended to Part III states that a publisher shall “exercise due caution and discretion” in relation to content featuring the activities, beliefs, practices, or views of any racial or religious group. This higher degree of responsibility, which is ambiguous, may restrict the artistic expression of OTT Platforms. In Shreya Singhal v Union of India, the Supreme Court struck down Section 66A of the IT Act, holding that “where no reasonable standards are laid down to define guilt in a section which creates an offence and where no clear guidance is given to either law abiding citizens or to authorities and courts, a section which creates an offence and which is vague must be struck down as being arbitrary and unreasonable”. By stating that the Constitution did not permit the legislature “to set a net large enough to catch all possible offenders and leave it to the Court to step in and decide who could be held guilty”, the Supreme Court decisively ruled that a law which is vague would be void. Although a breach of the 2021 Rules does not have penal consequences, the Code of Ethics utilises open-ended, broad language whose interpretation could confer excessive discretion on the IDC in deciding what content to remove.
Under India’s constitutional structure, free expression can only be limited to the extent prescribed by Article 19(2), and courts scrutinise any restrictions of expression stringently due to the centrality of free speech and expression to the continued maintenance of constitutional democracy. In S. Rangarajan v P. Jagjivan Ram, the Supreme Court observed that the medium of a movie was a legitimate mode to address issues of general concern. Further, the producer had the right to ‘think out’ and project his own message despite the disapproval of others; “it is a part of democratic give-and-take to which no one could complain. The State cannot prevent open discussion and open expression, however hateful to its policies”. The Apex Court further stated that it was the duty of the State to protect the freedom of expression. In K.A. Abbas v Union of India, the Supreme Court upheld the constitutionality of censorship under the Cinematograph Act, but cautioned that the censorship could only be in the interest of society, and that if it ventured beyond this arena, it could be questioned on the ground that a legitimate power was being misused.
In the aforementioned cases, the courts, while upholding censorship guidelines, acknowledged that the same had to be grounded within the four corners of Article 19(2), and that the standard for censorship had to be that of an ordinary individual of common sense and prudence, and not that of a hypersensitive individual. However, in recent times, there have been regular outcries against films and web series which may offend the sensitivities of certain sections of the public. It must be noted that the Government also has a duty to protect the speakers of unpopular opinions, and restrictions on the freedom of speech must only be a last resort when the situations provided for in Article 19(2) (e.g., public order or security of the State) are at stake. Such an approach would help allay the concerns of publishers who may otherwise either refrain from creating content that could be potentially controversial or remove or modify scenes.
Conclusion
A mechanism that risks the overregulation of content on OTT platforms, and that grants significant discretion to the Ministry by way of the formation of the IDC, has the potential to dilute constitutional rights. Further, with India’s burgeoning influence as a producer of cultural content, such a rigid and subjective manner of regulation inhibits artistic expression and may have a chilling effect on the exercise of free speech and expression. Publishing of content on OTT platforms is different from traditional broadcasting in the way that it is made available to the public. Streaming of content on OTT platforms is based on an ‘on-demand’ principle where viewers actively choose the content they wish to consume, and it may therefore require specialised regulation. A balanced approach should be adopted for the regulation of OTT platforms, one that adheres to the values embedded in the Constitution as well as the guidelines envisioned by the Supreme Court in the judgements discussed above.
This blog was written with the support of the Friedrich Naumann Foundation for Freedom.
About the Author: The author is a 2021 graduate of National Law University, Delhi. She is currently working as a Research Associate with the Digital Media Content Regulatory Council.
Editor’s Note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law. Along with a companion piece by Tejaswita Kharel, the two essays bring to life a fascinating debate by offering competing responses to the following question:
Do you agree with the Supreme Court’s pronouncement in Anuradha Bhasin that access to the internet is an enabler of other rights, but not a fundamental right in and of itself? Why/why not? Assuming for the sake of argument, that access to the internet is a fundamental right (as held by the Kerala High Court in Faheema Shirin), would the test of reasonableness of restrictions be applied differently, i.e. would this reasoning lead to a different outcome on the constitutionality (or legality) of internet shutdowns?
Both pieces were developed in the spring semester, 2020 and do not reflect an updated knowledge of subsequent factual developments vis-a-vis COVID-19 or the ensuing pandemic.
INTRODUCTION
Although it did little to hold the government accountable for its actions in Kashmir, it would be incorrect to say that the judgment in Anuradha Bhasin v. Union of India is a complete failure. This reflection paper evaluates the lessons learnt from Anuradha Bhasin and argues in favour of access to the internet as a fundamental right, especially in light of the COVID-19 pandemic.
EXAMINING INDIA’S LEGAL POSITION ON RIGHT TO INTERNET
Perhaps the greatest achievement of the Anuradha Bhasin judgement is the fact that the Government is no longer allowed to pass confidential orders to shut down the internet for a region. Moreover, the reasons behind internet shutdown orders must not only be available for public scrutiny but also be reviewed by a Committee. The Committee will need to scrutinise the reasons for the shutdown and must benchmark them against the proportionality test. This includes evaluating the pursuit of a legitimate aim, exploration of suitable alternatives, and adoption of the least restrictive measure while also making the order available for judicial review. The nature of the restriction and its territorial and temporal scope will be relevant factors in determining whether it is proportionate to the aim sought to be achieved. The Court also held that fundamental rights extend to the virtual space with the same protections. In this regard, the Court made certain important pronouncements on the right to freedom of speech and expression. These elements will not be discussed here as they fall outside the scope of this paper.
A few months prior, in 2019, the Kerala High Court recognised access to the internet as a fundamental right. In its judgement in Faheema Shirin v. State of Kerala, the High Court addressed a host of possible issues that arise with a life online. Specifically, the High Court recognised how the internet extends individual liberty by giving people a choice to access the content of their choice, free from control of the government. The High Court relied on a United Nations General Assembly Resolution to note that the internet “… facilitates vast opportunities for affordable and inclusive education globally, thereby being an important tool to facilitate the promotion of the right to education…” – a fact that has only strengthened in value during the pandemic. The Kerala High Court held that since the Right to Education is an integral part of the right to life and liberty enshrined under Article 21 of the Constitution, access to the internet becomes an inalienable right in and of itself. The High Court also recognised the value of the internet to the freedom of speech and expression, holding that access to the internet is protected under Art. 19(1)(a) of the Constitution and can be restricted only on grounds consistent with Art. 19(2).
ARGUING IN FAVOUR OF RIGHT TO INTERNET
During the pandemic, a major reason why some of us have any semblance of freedom and normalcy in our lives is the internet. At a time when many aspects of our day-to-day lives have moved online, including education, healthcare, and shopping for essential services, the fundamental importance of the internet should not even be up for debate. The Government also uses the internet to disseminate essential information. In 2020 it used a contact tracing app (Aarogya Setu) which relied on the internet for its functioning. There also exists a WhatsApp chatbot to give accurate information about the pandemic. The E-Vidya Programme was launched by the Government to allow schools to become digital. In times like this, the internet is not merely one of the means to access constitutionally guaranteed services; it is the only way.
In this context, the right of access to the internet should be read as part of the Right to Life and Liberty under Art. 21. Therefore, internet access should be subject to restrictions only based on procedures established by law. To better understand what shape such restrictions could take, lawmakers and practitioners can seek guidance from another recent addition to the list of rights promised under Art. 21: the right to privacy. The proportionality test was laid down in the Puttaswamy I judgment and reiterated in Puttaswamy II (“Aadhaar Judgement”). In the Aadhaar Judgement, when describing proportionality for reasonable restrictions, the Supreme Court stated:
“…a measure restricting a right must, first, serve a legitimate goal (legitimate goal stage); it must, secondly, be a suitable means of furthering this goal (suitability or rational connection stage); thirdly, there must not be any less restrictive but equally effective alternative (necessity stage); and fourthly, the measure must not have a disproportionate impact on the right-holder (balancing stage).”
This excerpt from Puttaswamy II provides a defined view of the proportionality test upheld by the court in Anuradha Bhasin. This means that before passing an order to shut down the internet, the appropriate authority must assess whether the order aims to meet a goal which is of sufficient importance to override a constitutionally protected right. More specifically, the goal must fall under the category of reasonable restrictions as provided for in the Constitution. Next, there must be a rational connection between this goal and the means of achieving it. The appropriate authority must ensure that an alternative method cannot achieve this goal with just as much effectiveness, and that the method being employed is the least restrictive. Lastly, the internet shutdown must not have a disproportionate impact on the right holder, i.e. the citizen whose right to freedom of expression or right to health is being affected by the shutdown. These reasons must be put down in writing and be subject to judicial review.
Based on the judgment in Faheema Shirin, an argument can be made that the pandemic has further highlighted the importance of access to the internet, not created it. The Government’s reliance on going digital, with e-governance and digital payment platforms, shows an intention to herald the country into a world that has more of an online presence than ever before.
CONCLUSION
People who are without access to the internet right now* – people in Kashmir, who have access to only 2G internet on mobile phones, or those who do not have the socio-economic and educational means to access the internet – are suffering. Not only are they being denied access to education; the lack of access to updated information about a disease we are still learning about could prove fatal. Given the importance of the internet at this time of crisis, and for the approaching future, where people would want to avoid being in crowded classrooms, marketplaces, or hospitals, access to the internet should be regarded as a fundamental right.
This is not to say that the Court’s recognition of this right can herald India into a new world. The recognition of the right to access the internet will only be a welcome first step towards bringing the country into the digital era. The right to access the internet should also be made a socio-economic right, which, if implemented robustly, will have far-reaching consequences such as greater ease of social mobility, increased innovation, and the fostering of greater creativity.
*Views expressed in the blog are personal and should not be attributed to the institution.
About the Author: The author is a 2021 graduate of National Law University, Delhi. She is currently working as a lawyer in Kathmandu, Nepal. Her interests lie in the area of digital rights, freedom of speech and expression and constitutional law.
Editor’s Note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law. Along with a companion piece by Shreyasi Tripathi, the two essays bring to life a fascinating debate by offering competing responses to the following question:
Do you agree with the Supreme Court’s pronouncement in Anuradha Bhasin that access to the internet is an enabler of other rights, but not a fundamental right in and of itself? Why/why not? Assuming for the sake of argument, that access to the internet is a fundamental right (as held by the Kerala High Court in Faheema Shirin), would the test of reasonableness of restrictions be applied differently, i.e. would this reasoning lead to a different outcome on the constitutionality (or legality) of internet shutdowns?
Both pieces were developed in the spring semester, 2020 and do not reflect an updated knowledge of subsequent factual developments vis-a-vis COVID-19 or the ensuing pandemic.
INTRODUCTION
The term ‘internet shutdown’ can be defined as an “intentional disruption of internet or electronic communications, rendering them inaccessible or effectively unusable, for a specific population or within a location, often to exert control over the flow of information”.1 It has become a tool used by States against their own residents when faced with an imminent threat to, or breakdown of, law and order. It is used in the belief that a blanket shutdown of the internet helps restrict the misinformation, spread of fake news, and incitement to violence that could otherwise take place.
ANURADHA BHASIN JUDGEMENT: INTERNET AS ENABLER OF FUNDAMENTAL RIGHTS ENSHRINED UNDER THE CONSTITUTION OF INDIA
Due to the suspension of mobile and broadband internet services in Jammu and Kashmir on August 4, 2019, before the abrogation of Article 370 of the Constitution of India, a petition was filed at the Supreme Court by Anuradha Bhasin (a journalist at Kashmir Times). The petition challenged the Government’s curb of media freedom in Jammu and Kashmir as a result of the blanket internet and communications shutdown. On 10 January 2020, the Supreme Court, in its judgement in Anuradha Bhasin v. Union of India, held that the internet is a means to realise fundamental rights under Article 19 of the Constitution. The Court’s decision specifically applied to the right to freedom of speech and expression and the right to carry on trade or businesses.
The Court did not explore or answer the question of whether access to the internet is by itself a fundamental right, since this was not contended by counsel. However, the Court did state that since fundamental rights could be affected by the measures applied by authorities (in this case, an internet shutdown), any lawful measure restricting these fundamental rights must be proportionate to the goal.
One reading of the Supreme Court’s decision in Anuradha Bhasin is that the case could act as an enabler which legitimises government-mandated internet shutdowns. Nevertheless, the Court does explicitly hold that the curtailment of fundamental rights affected by internet access restrictions must be proportionate. In pursuance of this, restrictive measures need to be the least restrictive in nature. However, determining what constitutes the least restrictive measure is a subjective question and would vary on a case-by-case basis. There is no guarantee that internet shutdowns would not be the chosen measure.
Critiquing the Rationale of the Anuradha Bhasin Judgement
It is important to investigate why the Court was hesitant to deem internet access a fundamental right. One major reason could be that access to the internet is, in any case, not currently possible for all citizens of India. At the time of writing this paper, approximately half of India’s population has access to and uses the internet. Where such a visible ‘Digital Divide’ exists, i.e. where half of the Indian population cannot access the internet and the government has not yet been able to provide universal access, it would not be feasible for the Court to hold that access to the internet is in fact a fundamental right.
If the Court were to hold that access to the internet is a fundamental right in the current situation, there would be a question of what internet access means. Is access to the internet simply access to an internet connection? Or does it also include the means required in order to access the internet in the first place?
If it means only the first, then deeming access to the internet a fundamental right would be futile, since electronic devices (e.g. laptops, smartphones, etc.) are required in order to use an internet connection. At a purely fiscal level, it would be improbable for the State to fulfil such a Constitutional mandate. Moreover, access to the internet would be a fundamental right only for those who have the privilege of obtaining the means to access it. The burden on the State would be too high, since the State would be expected to provide not just an internet connection but also the electronics required in order to access the same. In either case, it does not seem feasible for access to the internet to be deemed a fundamental right due to the practical constraint of India’s immense digital divide.
RIGHT TO INTERNET FOR CURRENT AND FUTURE CHALLENGES
At a future point where it is feasible for more people to access the internet in India (especially in rural/remote areas), it may be appropriate to deem access to the internet a fundamental right. However, at this juncture, to argue that access to the internet is a fundamental right (knowing that it is primarily accessible to more privileged segments) would be an assertion anchored in privilege. Therefore, as important as the internet is for speech and expression, education, technology, etc., the fact that it is not accessible to a lot of people is something for policymakers and wider stakeholders to consider.
This is especially important to look at in the context of COVID-19. Lockdowns and movement restrictions have increased remote work and accelerated online education. In order to work or study online, people must have access to both devices and the internet.
In this context, a UNICEF Report (August 2020) observed that only 24% of Indian households had an internet connection to access education, and in November 2020 an undergraduate student died by suicide because she was unable to afford a laptop. This provides macro and micro evidence of the blatant digital divide in India. Hence, it is not feasible to deem the right to access the internet a fundamental right.
In any case, if we were to assume that the right to access the internet is a fundamental right, as was held on 19 September 2019 by the Kerala High Court in Faheema Shirin R.K v. State of Kerala, the issue of whether internet shutdowns are legal or not would still be contested. Article 19(2) provides certain conditions under which the right to freedom of speech and expression under Article 19(1)(a) can be reasonably restricted. Similarly, Article 19(6) of the Constitution provides that the right to carry on trade and business can be reasonably restricted in the interest of the general public. If access to the internet were deemed a fundamental right, it would be necessary to look at the scope of Articles 19(2) and 19(6) through a different lens. Nevertheless, such alteration would not yield a different application of the law. In essence, the Government’s restrictions on internet access would operate in the same way.
It is highly likely that internet shutdowns would still be constitutional. However, there could be a change in the current stance on the legality of internet shutdowns. The situations in which internet shutdowns would be legal may become narrower. There may even be a need for specific legislation for clarity and for compliance with constitutional obligations.
CONCLUSION
Due to COVID-19, many people are unable to access education or work in the same way as before. Even courts are functioning online, and with that the necessity of access to the internet has never been stronger. The court in Anuradha Bhasin held that the internet was an enabler of rights under Articles 19(1)(a) and 19(1)(g). However, now, with the added necessity of using the internet as a medium of accessing education and as a medium to access justice (which has been recognised as a fundamental right under Articles 21 and 14), lawmakers and Courts must evaluate whether the rising dependency on internet access would in itself be a reason for internet access to become crystallised as a fundamental right.
*Views expressed in the blog are personal and should not be attributed to the institution.
The Patiala House court at Delhi has rejected R. K. Pachauri’s plea for an interim gag order against NDTV, Bennett Coleman and Co., and the India Today Group. The media houses had been made defendants in a defamation suit filed by him in 2016.
In 2015, an FIR had been filed against Pachauri by a woman employee of TERI (The Energy and Resources Institute, of which he was then the Chief) accusing him of sexual harassment. Following these allegations, several other women had spoken out about similar experiences while they had worked at the organization. The allegations and ongoing proceedings had received extensive coverage in the media.
Pachauri filed for defamation against multiple parties, including the media houses, one of the women who had spoken out, as well as her lawyer. He sought a gag order against the media houses, and damages of Rs. 1 Crore from the victim and her lawyer.
We have written previously about how suits such as these are in the nature of ‘SLAPP’ suits – Strategic Lawsuits Against Public Participation. These are cases where powerful individuals and corporations use litigation as a way of intimidating or silencing their critics. The defendants are usually media houses or individuals who are then forced to muster the resources to mount a legal defense. Even if they are able to secure a victory in Court, it is at the cost of a protracted and expensive process.
The court has now refused to grant an interim injunction against the media houses, noting the public’s right to be aware of the developments. It further noted that public figures may be subjected to a higher degree of scrutiny by the public. However, it has also held that further reportage must carry Pachauri’s views and indicate that the matter is still pending before the Court. The text of the order may be found here.
Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi
In 2016, Norwegian writer Tom Egeland uploaded a post on Facebook listing seven photographs that “changed the history of warfare”. The post featured the Pulitzer-winning image, ‘The Terror of War’, which depicts a naked nine-year-old running from a napalm attack during the Vietnam War. Facebook deleted the post, and suspended Egeland’s account.
A Norwegian newspaper, Aftenposten, while reporting on the suspension, used the same image on its Facebook page. The newspaper soon received a message from Facebook demanding that the image be either removed or pixelated. The editor-in-chief refused to comply in an open letter to Mark Zuckerberg, noting his concern at the immense power Facebook wielded over speech online. The issue escalated when several Norwegian politicians, including the Prime Minister, shared the image on Facebook, and were temporarily suspended from Facebook as well.
Facebook initially stated that it would be difficult to distinguish between instances where a photograph of a nude child could be allowed and those where it could not. However, following widespread censure, the platform eventually decided to reinstate the image owing to its “status as an iconic image of historical importance.”
This incident brought to light the tricky position Facebook finds itself in as it attempts to police its platform. Facebook addresses illegal and inappropriate content through a mix of automated processes and human moderation. The company publishes guidelines, called its ‘Community Standards’, about what content may not be appropriate for its platform. Users can ‘flag’ content that they think does not meet the Community Standards, which is then reviewed by moderators. Moderators may delete, ignore, or escalate flagged content to a senior manager. In some cases, the user’s account may be suspended, or the user may be asked to submit identity verification.
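As a rough illustration of this review flow, the sketch below models a flagged post being deleted, ignored, or escalated to a senior reviewer when the call is unclear. The class and function names are hypothetical simplifications for illustration only, not a description of Facebook’s actual systems.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Decision(Enum):
    DELETE = auto()
    IGNORE = auto()
    ESCALATE = auto()

@dataclass
class FlaggedPost:
    post_id: str
    text: str
    # None models the ambiguous cases a first-line reviewer cannot decide.
    violates_standards: Optional[bool]

def review(post: FlaggedPost) -> Decision:
    """One moderator's pass over a post that users have flagged."""
    if post.violates_standards is None:
        return Decision.ESCALATE  # unclear cases go to a senior manager
    return Decision.DELETE if post.violates_standards else Decision.IGNORE

# Example: a clearly compliant post is ignored; an ambiguous one is escalated.
print(review(FlaggedPost("1", "holiday photos", violates_standards=False)))   # Decision.IGNORE
print(review(FlaggedPost("2", "borderline satire", violates_standards=None)))  # Decision.ESCALATE
```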
As evident from the ‘Terror of War’ incident, Facebook has often come under fire for supposedly ‘wrong’ moderation of content, as well as opacity in how its community review process comes to be applied. It has been argued that content that is evidently in violation of the Community Standards is often not taken down, while content that should be safe is censored. For instance, Facebook courted controversy again when it was accused of blocking content and accounts documenting the persecution of the Rohingya Muslim community in Myanmar.
Closer home as well, multiple instances of Facebook’s questionable moderation practices have come to light. In October 2017, Raya Sarkar, a law student based in the United States, created what came to be called ‘the List’, which named over 70 prominent academics who had been accused of sexual harassment. The approach proved extremely controversial, sparking debates about due process and the failure of institutional mechanisms to address harassment. Facebook blocked her account for seven days, which proved equally contentious. Sarkar’s account was restored only after Facebook staff in Palo Alto were contacted directly. Similar instances of seemingly arbitrary application of the Community Standards have been reported; in many cases, accounts have been suspended and content blocked without notice, explanation, or recourse.
Content moderation inherently involves much scope for interpretation and disagreement. Factors such as context and cultural differences render it a highly subjective exercise. Algorithms do not appear to have reached sufficient levels of sophistication, and there are larger issues associated with the automated censoring of speech. Human moderators are, by all accounts, burdened by the volume and the psychologically taxing nature of the work, and are therefore prone to error. The way forward should therefore begin with ensuring that transparent mechanisms exist for recourse against the removal of legitimate speech.
In light of the ‘Terror of War’ incident, Facebook responded by updating its community standards. In a statement, it said that it would allow graphic material that would be “newsworthy, significant, or important to the public interest — even if they might otherwise violate our standards.” Leaked moderator guidelines in 2017 opened the company up to granular public critique of its policies. There is evidently scope for Facebook to be more responsive and consultative in how it regulates speech online.
In June 2017, Facebook reached 2 billion monthly users, making it the largest social network, and a platform for digital interaction without precedent. It has announced plans to reach 5 billion. With the influence it now wields, it must also embrace its responsibility to be more transparent and accountable to its users.
Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi
Update: Deadline to apply extended to January 15, 2018!
The Centre for Communication Governance at the National Law University Delhi (CCG) invites applications for research positions in its Civil Liberties team on a full time basis.
About the Centre
The Centre for Communication Governance is the only academic research centre dedicated to working on information law and policy in India, and in a short span of four years it has become a leading centre on information policy in Asia. It seeks to embed human rights and good governance within communication policy and protect digital rights in India through rigorous academic research and capacity building.
The Centre routinely works with a range of international academic institutions and policy organizations. These include the Berkman Klein Center at Harvard University, the Programme in Comparative Media Law and Policy at the University of Oxford, the Center for Internet and Society at Stanford Law School, Hans Bredow Institute at the University of Hamburg and the Global Network of Interdisciplinary Internet & Society Research Centers. We engage regularly with government institutions and ministries such as the Law Commission of India, Ministry of Electronics & IT, Ministry of External Affairs, the Ministry of Law & Justice and the International Telecommunications Union. We work actively to provide the executive and judiciary with useful research in the course of their decision making on issues relating to civil liberties and technology.
CCG has also constituted two advisory boards, a faculty board within the University and one consisting of academic members of our international networks. These boards will oversee the functioning of the Centre and provide high level inputs on the work undertaken by CCG from time to time.
About Our Work
The work at CCG is designed to build competence and raise the quality of discourse in research and policy around issues concerning civil liberties and the Internet, cybersecurity and global Internet governance. The research and policy output is intended to catalyze effective, research-led policy making and informed public debate around issues in technology and Internet governance.
The work of our civil liberties team covers the following broad areas:
Freedom of Speech & Expression: Research in this area focuses on human rights and civil liberties in the context of the Internet and emerging communication technology in India. Research on this track squarely addresses the research gaps around the architecture of the Internet and its impact on free expression.
Access, Markets and Public Interest: Research under this area considers questions of access, including how the human right to free speech could help guarantee access to the Internet. It identifies areas where competition law may need to intervene to ensure free, fair and human rights-compatible access to the Internet and to opportunities to communicate using online services. Work in this area considers how existing competition and consumer protection law could be applied to ensure that freedom of expression in new media, and particularly on the internet, is protected given market realities on the supply side. Under this track we will also put out material on net neutrality concerns, which are closely tied to competition, innovation, media diversity and the protection of human rights, especially the rights to free expression and to receive information, and to substantive equality across media. It will also engage with existing theories of media pluralism in this context.
Privacy, Surveillance & Big Data: Research in this area focuses on surveillance as well as data protection practices, laws and policies. The work may be directed either at the normative questions that arise in the context of surveillance or data protection, or at empirical work, including data gathering and analysis, with a view to enabling policy and law makers to better understand the pragmatic concerns in developing realistic and effective privacy frameworks. This work area extends to the right to be forgotten and data localization.
Role
CCG is a young and continuously evolving organization and the members of the centre are expected to be active participants in building a collaborative, merit led institution and a lasting community of highly motivated young researchers.
Selected applicants will ordinarily be expected to design and produce units of publishable research with Director(s)/ senior staff members. They will also be recommending and assisting with designing and executing policy positions and external actions on a broad range of information policy issues.
Equally, they will also be expected to participate in other work, including writing opinion pieces, blog posts, press releases, memoranda, and help with outreach. The selected applicants will also represent CCG in the media and at other events, roundtables, and conferences and before relevant governmental, and other bodies. In addition, they will have organizational responsibilities such as providing inputs for grant applications, networking and designing and executing Centre events.
Qualifications
The Centre welcomes applications from candidates with advanced degrees in law, public policy and international relations.
Candidates should preferably be able to provide evidence of an interest in human rights, technology law and policy, Internet governance, or national security law. In addition, they must have a demonstrable capacity for high-quality, independent work.
In addition to written work, a project/ programme manager within CCG will be expected to play a significant leadership role. This ranges from proactive agenda-setting to administrative and team-building responsibilities.
Successful candidates for the project / programme manager position should show great initiative in managing both their own and their team’s workloads. They will also be expected to lead and motivate their team through high stress periods and in responding to pressing policy questions.
However, the length of your resume is less important than the other qualities we are looking for. As a young, rapidly-expanding organization, CCG anticipates that all members of the Centre will have to manage large burdens of substantive as well as administrative work in addition to research. We are looking for highly motivated candidates with a deep commitment to building information policy that supports and enables human rights and democracy.
At CCG, we aim very high and we demand a lot of each other in the workplace. We take great pride in high-quality outputs and value individuality and perfectionism. We like to maintain the highest ethical standards in our work and workplace, and love people who manage all of this while being as kind and generous as possible to colleagues, collaborators and everyone else within our networks. A sense of humour will be most welcome. Even if you do not necessarily fit the requirements mentioned above but bring the other qualities we look for, we would love to hear from you.
[The Centre reserves the right to not fill the position(s) if it does not find suitable candidates among the applicants.]
Positions
Based on experience and qualifications, successful applicants will be placed in the following positions. Please note that our interview panel has the discretion to determine which profile would be most suitable for each applicant.
Programme Officer (2-4 years’ work experience)
Project Manager (4-6 years’ work experience)
Programme Manager (6-8 years’ work experience)
A Master’s degree from a highly regarded programme might count towards work experience.
CCG staff work at the Centre’s offices at National Law University Delhi’s campus. The positions on offer are for a duration of one year, and we expect a commitment of two years.
Remuneration
The salaries will be competitive, and will usually range from ₹50,000 to ₹1,20,000 per month, depending on multiple factors including relevant experience, the position and the larger research project under which the candidate can be accommodated.
Where candidates demonstrate exceptional competence in the opinion of the interview panel, there is a possibility for greater remuneration.
Procedure for Application
Interested applicants are required to send the following information and materials by December 30, 2017 to ccgcareers@nludelhi.ac.in.
Curriculum Vitae (maximum 2 double spaced pages)
Expression of Interest in joining CCG (maximum 500 words).
Contact details for two referees (at least one academic). Referees must be informed that they might be contacted for an oral reference or a brief written reference.
One academic writing sample of between 1000 and 1200 words (essay or extract, published or unpublished).
Shortlisted applicants may be called for an interview.
The Network Enforcement Act (NetzDG), a law passed in Germany on 30 June 2017, operates to fine social media companies up to 50 million Euros – approximately 360 crore rupees – if they persistently fail to remove hate speech from their platforms within 24 hours of the content being posted. Companies have up to one week where the illegality of the content is debatable.
NetzDG is intended to hold social media companies financially liable for the opinions posted on their platforms. The Act effectively subjects social media platforms to the stricter content standards demanded of traditional media broadcasters.
Why was the act introduced?
Germany is one of the world’s strictest regulators of hate speech. The State’s Criminal Code covers issues of defamation, public threats of violence and incitement to illegal conduct, and provides for incarceration for Holocaust denial or inciting hatred against minorities. Germany is a country sensitive to the persuasive power of oratory in radicalizing opinion. The parameters of these sensitivities are being tested as the influx of more than one million asylum seekers and migrants has catalyzed a notably belligerent public discourse.
In response to the changing discourse, Facebook and a number of other social media platforms consented in December 2015 to the terms of a code of conduct drafted by the Merkel Government. The code of conduct was intended to ensure that platforms adhered to Germany’s domestic law when regulating user content. However, a study monitoring Facebook’s compliance found the company deleted or blocked only 39 percent of reported content, a rate that put Facebook in breach of the agreement.
NetzDG turns the voluntary agreement into a binding legal obligation, making Facebook liable for any future failure to adhere to its terms.
In a statement made following the law’s enactment, German Justice Minister Heiko Maas declared ‘With this law, we put an end to the verbal law of the jungle on the Internet and protect the freedom of expression for all… This is not a limitation, but a prerequisite for freedom of expression’. The premise of Minister Maas’s position, and the starting point for the principles that validate the illegality of hate speech, is that verbal radicalization is often the precursor to physical violence.
As the world’s predominant social media platform, Facebook has acquired unprecedented, and in some respects unconditioned, access to people and their opinions. Given the extent of that access, this post will focus on the possible effects of the NetzDG on Facebook and its users.
Facebook’s predicament
Regulatory methods
How Facebook intends to observe the NetzDG is unclear. The social media platform, whose users now constitute one-quarter of the world’s population, has previously been unwilling to disclose the details of its internal censorship processes. However, given the potential financial exposure and the sustained increase in user content, Facebook must, to some extent, increase its capacity to evaluate and regulate reported content. In response, Facebook announced in May that it would nearly double the number of employees tasked with removing content that violates its guidelines. Whether this increase in capacity will be sufficient will be determined in time.
However, and regardless of the move’s effectiveness, Facebook’s near doubling of capacity implies that human interpretation is the final authority, and that implication raises a number of questions: To what extent can manual censorship keep up with the consistent increase in content? Can the same processes maintain efficacy in a climate where hate speech is increasingly prevalent in public discourse? If automated censorship is necessary, who decides the algorithm’s parameters and how sensitive might those parameters be to the nuances of expression and interpretation? In passing the NetzDG, the German Government has relinquished the State’s authority to fully decide the answer to these questions. The jurisdiction of the State in matters of communication regulation has, to a certain extent, been privatised.
Censorship standards
Recently, an investigative journalism platform called ProPublica claimed possession of documents purported to be internal censorship guidelines used at Facebook. The unverified guidelines instructed employees to remove the phrase ‘migrants are filth’ but permit ‘migrants are filthy’. Whether the documents are legitimate is to some extent irrelevant: the documents provide a useful example of the specificity required where the aim is to guide one person’s interpretation of language toward a specific end – in this instance toward a correct judgment of legality or illegality.
Regardless of the degree of specificity, it is impossible for any formulation of guidelines to cover every possible manifestation of hate speech. Interpreting reported content will therefore necessarily require some degree of discretion. This necessity raises the question: to what extent will affording private entities discretionary powers of censorship impede freedoms of communication, particularly where the discretion afforded is conditioned by financial risk and a determination is required within a 24-hour period?
Facebook’s position
Statements made by Facebook prior to the legislation’s enactment expressed concern for the effect the Act would have on the already complex issue of content moderation. ‘The draft law provides an incentive to delete content that is not clearly illegal when social networks face such a disproportionate threat of fine’, a statement noted. ‘(The Act) would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies’. Facebook’s reservation is telling: the company’s reluctance to adopt the role of moderator to the extent required alludes to the potential consequences of the liability imposed by the Act.
The problem with imposing this form of liability
Any decision made by a social media platform to censor user content will be supported by the anti-discrimination principles prescribed by the NetzDG. However, where the motivation behind discretionary decision-making shifts away from social utility towards financial management, the guiding considerations become efficiency and risk minimisation. Efficiency and risk minimisation in this instance require Facebook to either (i) increase capacity, which in turn increases its financial burden, or (ii) adopt guidelines that minimise exposure.
Seemingly, the approach adopted by Facebook is to increase capacity. However, Facebook’s concern that the Act creates financial incentives to adopt guidelines that minimise exposure is significant. It demonstrates an understanding that requiring profit-motivated companies to do the work of the State within a 24-hour time frame will necessarily require a different set of parameters than those imposed on the regulation of oral hate speech. If Facebook, in drafting and applying those parameters, decides to err on the side of caution and, in some instances, censor otherwise legal content, that decision will have directly infringed the freedom of communication enjoyed by German citizens.
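The over-removal incentive Facebook is pointing to can be illustrated with a toy expected-cost calculation. The figures, and the very idea of an explicit cost comparison, are illustrative assumptions rather than the Act’s mechanism or Facebook’s actual practice; the point is simply that when the fine for wrongly keeping a post dwarfs the cost of wrongly removing it, a purely risk-minimising reviewer will take content down even where the chance of illegality is tiny.

```python
def remove_if_risk_minimising(p_illegal: float,
                              fine_if_kept: float = 50_000_000,  # NetzDG's maximum fine, in EUR
                              cost_if_removed: float = 1_000     # assumed cost of suppressing legal speech
                              ) -> bool:
    """Toy model: remove whenever the expected fine for keeping the post
    exceeds the (assumed, hard-to-quantify) cost of wrongly removing it."""
    expected_cost_of_keeping = p_illegal * fine_if_kept
    expected_cost_of_removing = (1 - p_illegal) * cost_if_removed
    return expected_cost_of_keeping > expected_cost_of_removing

# Under these assumptions, even content with a 1-in-10,000 chance of being
# illegal gets removed, illustrating the incentive to over-censor.
print(remove_if_risk_minimising(p_illegal=0.0001))  # True
```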
A democracy must be able to accommodate contrasting opinions if it purports to respect rights of communication and expression. Conversely, limitations on the rights enjoyed may be justified if they benefit the majority. The NetzDG is Germany’s recognition that the nature of online communication – the speed at which ideas promulgate and proliferate, and the disconnect between comment and consequence created by online anonymity – requires that the existing limitations on the freedom of communication be adapted. Whether instances of infringement are warranted in the current climate is a difficult and complicated extension of the debate between the utility of regulating hate speech and the corresponding consequences for the freedoms of communication and expression. The decision to pass the NetzDG suggests the German Government considers the risk of infringement acceptable when measured against the consequences of unfettered hate speech.
Public recognition that the NetzDG poses such a risk is important. It is best practice that, within a democracy, any new limit to liberty, oral or otherwise, be questioned and a justification given. Here the justification seems well-founded. However, the answers to the questions posed by sceptics may prove telling as Germany positions itself at the forefront of the debate over online censorship.
(William is a student at the University of Melbourne and is currently interning at CCG)