Will Fake News Decide the World’s Largest Election?

Varsha Rao


Social media platforms like Facebook and Twitter have been hauled up by authorities ranging from the United States Senate to India’s Parliamentary Standing Committee on Information Technology for their role as conduits in the dissemination of ‘fake news’.

In the run-up to the Lok Sabha Elections, social media and messaging platforms have put in place strategies to check attempts at spreading misinformation.

To prevent voter suppression (acts aimed at reducing voter turnout), Facebook bans misrepresentations about the voting process, such as claims that people can vote online. The launch of political advertisement archives in India by Facebook, Google and Twitter has also contributed to the fight against misinformation campaigns.

Fact-checkers are the second line of defence after artificial intelligence and machine learning have had their turn at identifying potentially false news. The judgement and research ability of the fact-checker determines the rating the flagged content receives, as well as its priority on platforms, such as on Facebook’s News Feed.
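The mechanics of this demotion are simple in principle. Below is a minimal, hypothetical sketch of how a fact-checker’s rating could feed into a post’s ranking score; the names and weights are illustrative assumptions, not Facebook’s actual system.

```python
# Hypothetical sketch: a fact-checker's rating scales a post's ranking
# score downward. Names and weights are illustrative assumptions, not
# Facebook's actual system.

RATING_WEIGHTS = {
    "true": 1.0,          # no penalty
    "partly_false": 0.4,  # demoted
    "false": 0.1,         # heavily demoted
}

def adjusted_score(base_score, rating):
    """Scale a post's base ranking score by its fact-check rating, if any."""
    if rating is None:  # not yet reviewed: rank normally
        return base_score
    return base_score * RATING_WEIGHTS.get(rating, 1.0)

posts = [
    {"id": 1, "score": 0.9, "rating": "false"},
    {"id": 2, "score": 0.5, "rating": None},
]
ranked = sorted(posts, key=lambda p: adjusted_score(p["score"], p["rating"]),
                reverse=True)
print([p["id"] for p in ranked])  # [2, 1]: the debunked post drops down
```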

However, entities involved in misinformation campaigns remain undeterred.

DISCOVERED! FAAMG IS BEHIND THE GLOBAL CONSPIRACY AGAINST PRIVACY


Cutting Off a Hydra’s Head: Challenges Befalling Social Media Intermediaries

The existence of multiple forms of social media and competing platforms allows malicious actors to engage in ‘platform shopping’ and utilize methods which throw up fewer obstacles.

An example of this is podcasts or series of audio files. Scrutinizing the content of podcasts for hate speech and misinformation is much more difficult than identifying buzzwords in articles. When platforms disseminating podcasts do not have transparent policies on taking down content, there is no guarantee that flagging a podcast for problematic content will contain its reach. Furthermore, since podcasts are in a nascent stage of popularity, platforms may not have the resources or funding to engage in extensive fact-checking or hire third party fact-checkers.

The emergence of ‘deepfakes’ – photos and videos doctored using machine learning – has also contributed to the flow of misinformation on social media. A lack of awareness of their existence and popularity, combined with the difficulty of spotting manipulations in the footage, exacerbates their ability to influence the target audience.

Social media companies are well aware that they are going up against determined actors with the capacity to generate creative solutions on the fly. One such example was observed by an American reporter on the chat platform Discord. When a user complained about Twitter proactively deleting accounts connected to voter-suppression attempts, another user suggested applying Snapchat filters to photos found online before using them on a fake account, in order to evade reverse image searches.
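The trick works because reverse image search relies on compact fingerprints of an image, and even a modest edit can change the fingerprint. The toy sketch below uses a simplified ‘average hash’ over an eight-pixel ‘image’ to show a filter-style edit breaking a lookup; real systems use far more robust features, so treat this purely as an illustration of the principle.

```python
# Toy illustration: a filter-style edit changes an image's 'average
# hash', so a lookup keyed on the original fingerprint misses the copy.
# Real reverse-image search uses far more robust features than this.

def average_hash(pixels):
    """One bit per pixel: is it brighter than the image's mean?"""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

original = [10, 200, 40, 180, 90, 150, 60, 220]  # toy 8-pixel 'image'
# A 'filter' that brightens half the image, like a sticker or overlay:
edited = [min(255, p + 80) if i < 4 else p for i, p in enumerate(original)]

h1, h2 = average_hash(original), average_hash(edited)
print(h1)  # (0, 1, 0, 1, 0, 1, 0, 1)
print(h2)  # (0, 1, 0, 1, 0, 0, 0, 1)
print("hashes match:", h1 == h2)  # False: the edited copy evades a match
```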

It does not help that certain challenges faced by social media companies have no immediate solutions. In its press releases, Facebook has highlighted the scarcity of professional fact-checkers worldwide, the time it takes for a complex news item to be scrutinized and the lack of meaningful data in local dialects to aid machine learning.

Furthermore, while social media companies have implemented solutions in good faith, these have not tackled the problem as a whole. A reporter for The Atlantic drew attention to a loophole in Facebook’s Ad Library authentication process, an otherwise effective dragnet in a sea of insidious advertising: by setting up a limited liability company to act as the publisher of an ad, special interest groups can obscure their identity and continue to sponsor ads on Facebook. Solutions can also founder on unpredictable user behaviour. On WhatsApp, for instance, labelling a message as forwarded may not prompt the recipient to question its legitimacy if the recipient has faith in the credibility of the sender.

While scrutinizing the strategies offered by social media platforms and other intermediaries, it is important to keep in mind that the problem of ‘fake news’ is not a new phenomenon. The introduction of the printing press in the 15th century also unleashed a wave of ‘fake news’ regarding witches and religious fanaticism which would be printed alongside scientific discoveries. Thus, while social media may have amplified its reach – much like a microphone does in the hands of a speaker – it is ultimately the individual spewing vitriol that is the true culprit. The burden of generating solutions cannot be solely borne by the intermediaries.

ELECTIONS CANCELLED: PARLIAMENTARIANS TO PICK PM BASED ON ANCIENT PROPHECY

Is the Government’s Heart in the Right Place: Misplaced Solutions for an Insidious Problem

Unfortunately, the Government has made scarce headway in curbing the dissemination of ‘fake news’. In April 2018, the Ministry of Information and Broadcasting issued a directive stating that the accreditation of journalists found to have generated or circulated ‘fake news’ would be suspended for a period determined by the frequency of violations, and cancelled on a third violation. The guidelines were withdrawn almost immediately on the direction of the Prime Minister’s Office, after journalists and media bodies heavily criticized the Government for attempting to muzzle the free press.

In December 2018, the Ministry of Electronics and Information Technology published the draft Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018, which require – inter alia – that intermediaries enable tracing of the origin of information and deploy automated tools to proactively identify and remove unlawful content. WhatsApp, a platform with end-to-end encryption, took a stand against breaking encryption, pointing to privacy and free speech concerns to justify its position.

As countries attempt to block the dissemination of ‘fake news’ on the internet and regulate the flow of information on social media platforms, it is imperative to ensure that overbroad definitions and strategies do not end up promoting political censorship.

China’s crackdown on ‘online rumours’ since 2013 is an example of the State controlling information flow. Not only must ‘rumours’ – including content undermining morality and the socialist system – be removed by social network operators, but their publication can also result in a three-year jail term for the creator. The licenses social media networks require to operate in China may be held hostage if their interpretation of ‘rumours’ does not align with that of the Chinese authorities. This incentivizes overly cautious intermediaries to block or report content that seems ‘fake’ by the Government’s standards, leading to collateral censorship.

POLITICAL LEADER RECEIVES SHOCKING TEXT MESSAGE ABOUT ELECTION RESULTS. YOU WON’T BELIEVE YOUR EYES!

‘What Is’ versus ‘What Could Have Been’: The Pitfalls of Election Campaigning  

The lack of significant engagement and progress on the ‘fake news’ and misinformation front is certainly a cause for concern as it points to a lack of political will.

The misuse of social media and messaging platforms by the ruling party as well as the Opposition has been widely reported by news outlets. The BJP President allegedly told the party’s IT cell volunteers that its 32 lakh-strong WhatsApp groups allow the BJP to deliver any message to the public, even if it is fake. Last month, Facebook took down pages connected to the Congress IT cell, as well as an IT firm behind the NaMo app, for coordinated inauthentic behaviour and spam. WhatsApp’s head of communications has also engaged with political parties to stress that WhatsApp is not a broadcast platform and that accounts engaging in bulk messaging will be banned.

For political parties, there is much to gain from manipulating public opinion in a country where elections are contested along narrow margins and election results have a long-lasting impact on the intricate fabric of national identity. Back in 2013, the Internet and Mobile Association of India (IAMAI) had gathered from a social media survey conducted across 35 Indian cities that the votes of only 3-4% of social media users could be swung. Of course, this was before the 2016 U.S. Presidential Elections, which saw social media disinformation campaigns executed with renewed vigour.

As a starting point, political parties could have agreed to refrain from executing misinformation campaigns and instead opted to influence the electorate by encouraging healthy debate based on verifiable facts. Mud-slinging and propaganda campaigns may well win elections, but political candidates cannot carry on with business as usual when ‘fake news’ has become a life-and-death issue in India.

In the run-up to its federal elections in 2017, major political parties in Germany entered into a ‘gentleman’s agreement’ to disregard information leaked as a result of cyberattacks instead of exploiting it. An agreement by Indian political parties on the ethics that ought to govern social media use would have underscored the same spirit.

Instead of attempting to increase the burden on intermediaries, the Government could also have undertaken extensive digital literacy campaigns to build resilience against attempts at manipulation, be it domestic or foreign. The campaigns could have been structured to highlight the techniques by which false information is propagated to manipulate the psychology of the voter.

Social media platforms, political parties and the Election Commission form a trinity that shares the responsibility of protecting the authenticity of the content informing a voter’s choice. While their degrees of responsibility differ, without collaboration the goal will remain out of reach. The shortcomings of the political parties do not absolve the social media intermediaries of their responsibility: Twitter did not launch an anti-voter-suppression reporting feature until half the polling phases were already complete, and there have been multiple instances of ‘fake news’ taken down on other social media platforms remaining in circulation on Twitter.

The impact of misinformation campaigns on the Lok Sabha elections will be uncovered only once the elections come to an end. The best-case scenario is that they have a negligible impact on the election result. The worst-case scenario? That the influence is so pervasive that we follow in the footsteps of the U.S. and take a minimum of two years to uncover its reach.

Regardless of what ultimately happens, perhaps there is one thing we can all agree on – not enough has been done to protect this “festival of democracy” from being manipulated.


(Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.)


Securing Electoral Infrastructure: How Alert is India’s Election Chowkidaar?

Varsha Rao

With the publication of Special Counsel Robert Mueller’s much-awaited report on Russian interference in the United States Presidential Elections of 2016, the threat of hacking and misinformation campaigns to influence elections is taking centre-stage yet again. Closer to home, the discussion has become more pertinent than ever before. In a democratic process of gigantic proportions, 900 million Indians across 543 constituencies are eligible to cast their vote in seven phases to elect a Government for the next five years.

The gravity and significance of the ongoing General Elections to the Lok Sabha thus raise the question – how susceptible is the world’s largest democracy to cyber interference?

Interfering in an election in the digital age involves a two-pronged attack – firstly, by influencing the political inclination of the electorate via misinformation campaigns on social media platforms, and secondly, by manipulating the electoral infrastructure itself. This article will focus on the latter, more specifically, the infrastructure and processes administered by the Election Commission of India.

Voter Registration Databases and Election Management Systems (EMS)

Unfettered access to voter registration databases arms malicious actors with the ability to alter or delete the information of registered voters, thereby impacting who casts a vote on polling day. Voter information can be deleted from the electoral rolls to accomplish en-masse voter suppression and disenfranchisement along communal lines in an already polarized voting environment. The connectivity of voter databases to various networks for real-time inputs and updates makes them highly susceptible to cyberattacks.

The manipulation of election management systems (EMS) can have an even wider impact on the electoral process. Gaining access to the Election Commission’s network would be akin to creating a peephole into highly confidential data ranging from deployment of security forces to the tracking of voting machines.

Election Commission staff can be targeted via phishing attacks in a manner similar to the cyberattacks executed during the 2016 U.S. Elections. Classified documents of the U.S. National Security Agency (NSA) as well as Special Counsel Robert Mueller’s report confirm that hackers affiliated with the Russian government targeted an American software vendor tasked with maintaining and verifying voter rolls. Thereafter, posing as the vendor, the hackers successfully tricked government officials into downloading malicious software that created a backdoor into the infected computers.

The Election Commission has made proactive attempts to improve the cyber hygiene of its officials by conducting national and regional cybersecurity workshops and issuing instructions regarding vigilance against phishing attacks. Furthermore, Cyber Security Regulations have been issued to govern officers’ online behaviour. A Chief Information Security Officer (CISO) was appointed at the central level in December 2017, and Cybersecurity Nodal Officers have been appointed at the State level.

The Election Commission has also addressed spoofing attempts by taking down imposter apps from mobile phone app distribution platforms. According to newspaper reports, the Election Commission has carried out a third-party security audit of all poll-related applications and websites, and enabled Secure Sockets Layer (SSL) on the Election Commission website to encrypt information exchanged between a user’s browser and the website.
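Readers can verify this kind of claim independently: the TLS/SSL certificate a site serves can be inspected with Python’s standard library alone. The hostname below is a placeholder, not the Election Commission’s actual domain.

```python
# Inspect the TLS/SSL certificate a site serves, using only the Python
# standard library. The hostname is a placeholder; substitute the site
# you want to check.

import socket
import ssl

hostname = "example.org"  # placeholder hostname
context = ssl.create_default_context()  # verifies the chain and hostname

with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Issued to:", dict(item[0] for item in cert["subject"]))
        print("Expires:", cert["notAfter"])
```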

There is no doubt that cybersecurity risks are constantly evolving, and it remains imperative for the Election Commission to conduct systematic and periodic vulnerability analyses in collaboration with security auditors to update Election Commission systems and software.

Electronic Voting Machines

An EVM is made up of two units – a Control Unit and a Balloting Unit, linked by a five-metre long cable. The Presiding/Polling Officer uses the Control Unit to release a ballot. This allows the voter inside the voting compartment to cast their vote on the Balloting Unit by pressing the button labelled with the candidate name and party symbol of their choice. An individual cannot vote multiple times as the machine is locked once a vote is recorded, and can be enabled again only when the Presiding Officer releases the ballot by pressing the relevant button on the Control Unit.
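For illustration, this one-ballot-one-vote interlock can be sketched as a small state machine. This is a toy model of the publicly described behaviour, not the actual EVM firmware.

```python
# Toy model of the Control Unit / Balloting Unit interlock described
# above: one ballot released per press, one vote per ballot. Not the
# actual EVM firmware.

class EVM:
    def __init__(self, candidates):
        self.tally = {c: 0 for c in candidates}
        self.ballot_open = False

    def release_ballot(self):         # pressed by the Presiding Officer
        self.ballot_open = True

    def press_candidate(self, name):  # pressed by the voter
        if not self.ballot_open:
            return "machine locked"   # repeat presses are ignored
        self.tally[name] += 1
        self.ballot_open = False      # locks until the next release
        return "vote recorded"

evm = EVM(["A", "B"])
evm.release_ballot()
print(evm.press_candidate("A"))  # vote recorded
print(evm.press_candidate("A"))  # machine locked: no double voting
```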

While the Election Commission has reiterated time and again that EVMs are tamper-proof, the machines have come under criticism from security researchers and computer scientists. To defend the integrity of EVMs, the Election Commission frequently cites the machine’s simple design. The EVMs are battery-operated in order to be functional in parts of the country that do not have access to electricity. Additionally, they are not connected to any online networks, nor do they contain wireless technology, thereby mitigating the possibility of remote software-based attacks. While these factors certainly reduce the potential for EVM hacking, they do not justify the Election Commission’s unshakeable belief that EVMs are infallible.

The most explosive demonstration of EVMs’ susceptibility to hacking was carried out back in 2010 by a Hyderabad-based technologist, Hari K. Prasad, in collaboration with J. Alex Halderman, an American computer science professor, and Rop Gonggrijp, a hacker who campaigned to decertify EVMs in the Netherlands.

Various personnel interact with the EVM, from the beginning of the supply chain to the officials and staff responsible for its storage and security before and after polling. In a paper published by Hari Prasad and his team, two methods of physical tampering were tested and demonstrated. The first replaces the Control Unit’s display board, which is used during counting to show the number of votes received by each candidate; the dishonest display board, on receiving instructions via Bluetooth, can intercept the vote totals and display fraudulent ones by adjusting the percentage of votes received by each candidate. The second attaches a temporary clip-on device to the memory chip inside the EVM to execute a vote-stealing program in favour of a selected candidate.

The physical security of the EVM takes on manifold importance in light of the above. The Election Commission has strict procedures in place to transport and store the machines, employing GPS and surveillance technology. Storage spaces known as ‘strong rooms’ having a single-entry point, double lock system and CCTV coverage are utilised. However, there have been frequent news reports about cases of EVM theft, strong room blackouts as well as unauthorized access.

The Election Commission has argued that since mock polls are conducted before official polling commences, any malfunctions or tampering attempts will be detected before they can impact the electoral process. However, this countermeasure does not address the possibility of attackers programming their tampering devices to kick into gear only after the EVM has recorded a set number of votes, thereby skipping over any mock-poll entries.
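A toy sketch makes the gap concrete: dishonest counting logic can be written to behave honestly until a threshold that no mock poll will ever reach. This is illustrative pseudocode of the attack class discussed above, not any demonstrated exploit.

```python
# Why a mock poll can miss tampering: counting logic that is honest
# below a threshold, then skims every fifth vote. Purely illustrative.

THRESHOLD = 200  # mock polls typically cast far fewer votes than this

class DishonestCounter:
    def __init__(self, favoured):
        self.favoured = favoured
        self.total = 0
        self.tally = {}

    def record(self, candidate):
        self.total += 1
        target = candidate
        # Honest below the threshold; afterwards, steal every 5th vote.
        if self.total > THRESHOLD and self.total % 5 == 0:
            target = self.favoured
        self.tally[target] = self.tally.get(target, 0) + 1

# A 50-vote mock poll only ever exercises the honest branch, so the
# machine passes the test and cheats only during real polling.
```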

Furthermore, while the source-coding – the writing of the software onto the EVM chip – is done by Indian public sector undertakings (PSUs), the microchips themselves are imported from the United States and Japan. Since the EVM chip is one-time programmable, it can neither be read, copied nor overwritten. The benefit of this feature is that the chips cannot be re-programmed by malicious actors. However, the masking also has a downside: if any vulnerabilities are inserted into the chip or source code as the machine components move along the supply chain, it may not be possible to detect them.

Introducing a Voter Verifiable Paper Audit Trail (VVPAT) system was widely touted as a second layer of verification to catch EVM malfunctions. It was only at the insistence of the Supreme Court that the Election Commission agreed to roll out EVMs with VVPATs for the ongoing General Elections.

When a vote is cast, the battery-operated VVPAT system prints a slip containing the serial number, name and symbol of the candidate, which is available for viewing through a transparent window for a few seconds. Following that, the slip falls into a sealed drop box.

An effective VVPAT audit is an important check on the vulnerabilities plaguing EVMs. The Election Commission’s procedure involved counting VVPAT slips in one polling booth per Assembly segment for the General Elections. The Supreme Court had to intervene again – at the insistence of Opposition parties – for the Election Commission to increase the audit from one EVM to five per Assembly segment. The Court did not accept the Opposition parties’ plea to have 33-50% of votes verified.

The call for extensive VVPAT slip audits has been an ongoing battle, with bureaucrats, politicians and experts on the frontlines. Former bureaucrats have written to the Election Commission to increase the audit sample size to 50 machines per 1 lakh booths instead of 5-6 machines. A former Chief Election Commissioner has proposed that the two runners-up in a constituency be given the option to randomly select two EVMs each for a VVPAT slip audit – a procedure similar to the Umpire Decision Review System in cricket. Another proposed method, known as the Risk Limiting Audit, requires ballots to be audited until a pre-determined statistical threshold of confidence is met.
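To see why a Risk Limiting Audit can confirm an outcome with a comparatively small sample, consider the simplified ballot-polling sketch below, written in the spirit of the BRAVO method (Lindeman, Stark and Yates): slips are drawn at random until a sequential test statistic clears a pre-set risk limit, and the audit escalates to a full hand count if it never does. The numbers are illustrative only.

```python
# Simplified ballot-polling risk-limiting audit (in the spirit of the
# BRAVO method): sample VVPAT slips at random until the evidence that
# the reported winner really won clears a pre-set risk limit.

import random

def rla(reported_winner_share, slips, risk_limit=0.05):
    """Wald-style sequential test over a shuffled pile of slips."""
    assert reported_winner_share > 0.5
    t, examined = 1.0, 0
    for slip in random.sample(slips, len(slips)):  # shuffled copy
        examined += 1
        if slip == "winner":
            t *= reported_winner_share / 0.5
        else:
            t *= (1 - reported_winner_share) / 0.5
        if t >= 1 / risk_limit:
            return examined  # outcome confirmed at this risk limit
    return None              # escalate to a full hand count

# A 55% reported winner in a 10,000-slip population: typically only a
# few hundred slips need to be examined, far fewer than a full count.
slips = ["winner"] * 5500 + ["loser"] * 4500
print("slips examined:", rla(0.55, slips))
```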

The resistance displayed by the Election Commission to introducing VVPAT slip audits as well as expanding the sample size of the audits is alarming. The Chief Justice of India even reprimanded the Election Commission for “insulat[ing] itself from suggestion for improvement”. Unsurprisingly, the Court had to reassure the Election Commission that in making recommendations to improve the electoral process, it was not casting aspersions on the functioning of the body.

While it is commendable that the Election Commission has embraced the implementation of technology like EVMs in the electoral process, it is becoming clear that it has not incorporated the tradition of vulnerability research and software patching to prevent further exploits. Security researchers must be provided time and unfettered access to test the efficacy and security offered by EVMs. Hacking challenges should not be restricted to EVM replicas or superficial tinkering on the external body of the EVM.

It is understandable for an authority like the Election Commission to focus on protecting the integrity of the institution as well as the election infrastructure. However, pointing out flaws in the EVM technology is not equivalent to an attack on the institution of the Election Commission. While the entire process of elections is built around trust – be it trust in the method of casting votes or trust in the authority tabulating the votes – it is the responsibility of those in whom the trust of the electorate is reposed to ensure transparency at every stage and welcome public scrutiny, especially when new and complex technology is being employed.

(Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.)

Celebrating One Year of the Puttaswamy Judgment (August 24, 6.00 pm, IIC)

Celebrating One Year of the Justice K.S. Puttaswamy v. Union of India Judgment

August 24, 2018

6.00 pm onwards

organized by

Indian Council for Research on International Economic Relations

&

Centre for Communication Governance at National Law University Delhi

at

Conference Hall – 2 | India International Centre | Max Mueller Marg | New Delhi

#DelhiTechTalks

Background

In a landmark decision on August 24th last year, a nine-judge bench of the Supreme Court unanimously upheld the fundamental right to privacy.

More recently, a committee headed by Justice B.N. Srikrishna submitted a report and a draft bill on data protection. Public Comments on the bill are due by early next month. The Supreme Court’s judgment on the Aadhaar challenge is imminent. There have also been other developments in this context such as RBI’s data localisation directive, the DNA profiling bill, the draft information security in health care bill, data localisation provisions in the e-commerce policy and the government’s recently withdrawn proposal to create a social media communications hub.

To commemorate the anniversary of the judgment and discuss the recently released Data Protection Bill and related issues, we are hosting this discussion on privacy and data protection.

Programme

6.00 – 6.15 pm: Tea & Coffee
6.15 – 6.20 pm: Initial Remarks
6.20 – 6.40 pm: State of Privacy in India & the Challenges to Realising Puttaswamy’s Promise

Dr. Usha Ramanathan, Independent Law Researcher

6.40 – 7.30 pm: Data Protection for a Free and Fair Digital Economy

The recently released draft data protection framework recognises the need to balance privacy with a free and fair digital economy. It articulates some of the benefits of big data and encourages its growth. However, it has been argued that compliance with such a framework will require current business models to change. Additionally, stringent provisions mandating where personal data may be processed, along with the wide discretion given to the central government and the regulatory authority, raise questions about the framework’s impact on the second largest online market in the world, home to nearly 500 million active Internet users and the businesses located in it.

Moderated by: Mansi Kedia, Consultant, Indian Council for Research on International Economic Relations (ICRIER)

Madhulika Srikumar, Associate Fellow, Observer Research Foundation

Malavika Raghavan, Project Head – Future of Finance Initiative, Dvara Research

Nehaa Chaudhari, Public Policy Lead, TRA Law

Smriti Parsheera, Technology Policy Researcher, National Institute of Public Finance and Policy (NIPFP)

7.30 – 8.20 pm: Legacy of the Justice K.S. Puttaswamy v. Union of India Judgment

The Court pronounced a landmark judgment last year; however, it remains to be examined whether judicial and legislative developments in India over the past year have upheld the principles enumerated in it. These include the proposed data protection framework and the ongoing hearings on the right to be forgotten, Aadhaar, Section 377 and adultery, among others.

Moderated by: Apurva Vishwanath, Special Correspondent, ThePrint

Kritika Bhardwaj, Lawyer, Supreme Court of India

Shweta Mohandas, Policy Officer, Centre for Internet & Society

Smitha K. Prasad, Civil Liberties Lead, Centre for Communication Governance at National Law University Delhi

Ujwala Uppaluri, Lawyer, Supreme Court of India

8.20 pm onwards: Dinner

11 Indian States have Shut Down the Internet 37 times since 2015

Mobile Internet services have been suspended in Kashmir for the past 87 days. There has been a sharp increase in both the frequency and duration of ICT shutdowns in the past two years. We have been tracking ICT shutdowns since 2012 as part of our research for the Freedom on the Net – India reports (2014 & 2015).

In 2012 there was only one incident of ICT shutdown (three including the Republic and Independence days). The Jammu & Kashmir government blocked telecom services in order to prevent users from uploading or downloading the film Innocence of Muslims. This is a rare instance of a shutdown in which the government order is publicly available. Most of the Internet shutdowns have been without any procedural transparency.

Jammu & Kashmir has been suspending mobile Internet on the Republic and Independence Days since 2005, with Republic Day 2015 being an exception. In 2013 there were three instances of ICT shutdowns (four including Republic Day) – all in the state of Jammu & Kashmir.

In 2014, Jammu & Kashmir blocked ICT services four times (including on Republic Day and Independence Day) and the State of Gujarat once. Gujarat ordered the block after two people were stabbed in Vadodara in clashes between two communities, following the circulation of an image on Facebook that was considered offensive to Islam.

Since 2014, there has been a massive spike in incidents of ICT shutdowns in India. We found that 11 Indian states have shut down the Internet 37 times since 2015, with 22 of those instances in the first nine months of 2016.

We have written extensively on Internet Shutdowns in the past year. For an analysis of the legal issues in case of Internet shutdowns please see:

Demarcating a safe threshold

The Anatomy of Internet Shutdowns – I (Of Kill Switches and Legal Vacuums)

The Anatomy of Internet Shutdowns – II (Gujarat & Constitutional Questions)

The Anatomy of Internet Shutdowns – III (Post Script: Gujarat High Court Verdict)

Internet Shutdowns: An Update

EU Code of Conduct on Countering Illegal Hate Speech Online: An Analysis

By Rishabh Bajoria

The Code

On 31st May, the European Commission (EC) announced a new Code of Conduct for online intermediaries. The Code was formulated after mutual agreement between the EC and Facebook, Microsoft, Google (including YouTube) and Twitter.[1] It targets the prompt removal of illegal hate speech online by intermediaries. The EC stated:

“While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously acted upon by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame.”[2]

It later clarifies that a notification must not be “insufficiently precise” or “inadequately substantiated”. Intermediaries are obliged “to review the majority of valid notifications” in “less than 24 hours and remove or disable access” to the content. They must review the notifications against the touchstone of their community rules and guidelines, and “national laws”, wherever necessary.

Reasons

The Code is understood to be a response to rising anti-Semitic and pro-Islamic State commentary on social media. Vĕra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, said, “The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred.”[3]

It is noteworthy that the intermediaries are American. This could be a way to avoid jurisdictional conflict. For example, in Licra et UEJF v Yahoo! Inc and Yahoo! France, Yahoo! refused to comply with a French Court’s order imposing liability on Yahoo! for its failure to disable access to the sale of Nazi memorabilia, a crime in France, on its website. Yahoo! contended that because its servers were located in the United States, the order was inapplicable. Subsequently, the U.S. District Court for the Southern District of New York in Yahoo! Inc. v. La Ligue Contre Le Racisme et L’Antisemitisme held Yahoo! to be a mere distributor, which could only be held liable if it had notice of the content.[4] This Code will supplement Articles 12-14 of the E-Commerce Directive 2000/31/EC, which preclude intermediaries from liability if they disable content “expeditiously” after receiving a “notice” of it. However, the Directive does not provide standards for “expeditious” or “notice”; the Code clarifies these ambiguous terms for the intermediaries, which are otherwise defined by domestic legislatures.[5] Moreover, because the intermediaries have agreed to abide by both the E-Commerce Directive and the Code of Conduct, such a jurisdictional issue will not arise.

Problems

This Code forces intermediaries to judge the legality of content. Once intermediaries are notified of content, they are obliged to investigate and determine whether the speech should be deleted. Twitter’s Head of Public Policy for Europe, Karen White, commented: “Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society. We remain committed to letting the Tweets flow. However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.”[6] Such a notice-and-takedown regime is problematic because this distinction is not always “clear”. There remains no universal consensus on the definition of hate speech. To evaluate whether speech falls within this category, Courts across jurisdictions look at a number of factors:

  1. Severity of the speech
  2. Intent of the speaker
  3. Content or form of the speech
  4. Social context in which the speech is made
  5. Extent of the speech (its reach and the size of its audience)
  6. Likelihood or probability of harm occurring[7]

The last two criteria are not analysed for speech which incites hatred, as hate speech, per se, is an inchoate crime. These factors are analysed cumulatively, with Courts balancing the value of the speech against the State’s positive obligations to maintain public order and protect the rights of others. Former UN Special Rapporteur for Freedom of Expression Frank La Rue has argued that private intermediaries should not be forced to carry out censorship, as they are not equipped to account for the various factors involved in determining the legality of speech.[8] Unlike a judiciary, evaluations by private intermediaries are often opaque and provide none of the legal safeguards a trial does, such as a right to appeal.[9] The mandate to censor speech within 24 hours of notification exacerbates this problem.

Proponents of this Code might argue that intermediaries already engage in self-censorship according to their Community Guidelines,[10] so codifying the obligation in legislation is not harmful. However, intermediaries are profit-oriented private corporations, and the legal obligation placed by the Code is accompanied by liability if breached. This threat of liability will cause them to err on the side of caution and over-censor speech. Professor Seth Kreimer, a constitutional and human rights law expert, argues that intermediaries know that potential liability will outweigh the additional revenue offered by a user.[11] This is likely to have a chilling effect on online speech.[12] Notably, the Indian Supreme Court in Shreya Singhal v Union of India[13] rejected the “private notice and takedown” standard, holding that an intermediary will only be liable if it fails to comply with a judicial order declaring the illegality of content.

For example, assume someone posts a controversial tweet, which users flag for removal. Even if the notice is “valid” and not “insufficiently precise”, Twitter still has to investigate before taking the tweet down. Big corporations like Twitter usually have legal teams for this, but such a team must now evaluate, within 24 hours, whether the speech incites violence or hatred. It will have to analyse the speech’s content and severity, the intent of the speaker and the social context, and scrutinize the causal link between the speech and potential violence. This is a nearly impossible task. Moreover, the team cannot know whether the judiciary would reach the same verdict; if the platform continues to disseminate the speech in good faith and the judiciary later deems it illegal, the platform can be held liable. This threat will push it to remove speech wherever the “distinction between freedom of expression and conduct that incites violence”[14] is not “clear”[15]. In the face of millions of such requests, intermediaries cannot be expected to make sound legal evaluations. As a result, society may be deprived of potentially valuable speech.

Thus, this Code effectively mandates private censorship. Intermediaries cannot make nuanced evaluations of whether speech incites hatred or violence within 24 hours, yet they can be held liable if a Court later finds content impermissible, even where they declined to delete it in good faith. The fear of this liability will make intermediaries err on the side of caution and over-censor, making the Code a recipe for a chilling effect online. While preventing terrorist propaganda is a legitimate aim, this response will disproportionately restrict freedom of speech and expression online.

[1] Code of Conduct on Countering Illegal Hate Speech Online, available at http://ec.europa.eu/justice/fundamental-rights/files/hate_speech_code_of_conduct_en.pdf; “European Commission’s Hate Speech Deal With Companies Will Chill Speech”, available at https://www.eff.org/deeplinks/2016/06/european-commissions-hate-speech-deal-companies-will-chill-speech.

[2] “European Commission and IT Companies announce Code of Conduct on illegal online hate speech” (Press Release), available at http://europa.eu/rapid/press-release_IP-16-1937_en.htm.

[3] Ibid.

[4] Omer, Corey. “Intermediary Liability for Harmful Speech: Lessons from Abroad.” Harv. J. Law & Tech. 28 (2014): 289-593.

[5] Verbiest, Thibault, Gerald Spindler, and Giovanni Maria Riccio. “Study on the liability of internet intermediaries.” Available at SSRN 2575069 (2007).

[6] “European Commission and IT Companies announce Code of Conduct on illegal online hate speech” (Press Release), available at http://europa.eu/rapid/press-release_IP-16-1937_en.htm.

[7] Toby Mendel, Study on International Standards Relating to Incitement to Genocide or Racial Hatred, a study for the UN Special Advisor on the Prevention of Genocide, April 2006, available at http://www.concernedhistorians.org/content_files/file/TO/239.pdf; “Towards an interpretation of Article 20 of the ICCPR: Thresholds for the prohibition of incitement to hatred”, available at http://www.ohchr.org/Documents/Issues/Expression/ICCPR/Vienna/CRP7Callamard.pdf.

[8] HRC, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ by Frank La Rue available at http://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf.

[9] Jack M. Balkin, ‘Old-School/New-School Speech Regulation”, (2014) 127 Harvard Law Review 2296.

[10] Freiwald, Susan. “Comparative Institutional Analysis in Cyberspace: The Case of Intermediary Liability for Defamation.” Harv. JL & Tech. 14 (2000): 569; MacKinnon, Rebecca, et al. Fostering Freedom online: the role of internet intermediaries. UNESCO Publishing, 2015.

[11] Seth F. Kreimer, ‘Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link’ (2006) 155 (11) U. Pa. L. Rev 2-33.

[12] Chinmayi Arun & Sarvjeet Singh, NoC Online Intermediaries Case Studies Series: Online Intermediaries in India 24, 25 (2015), available at http://ccgtlr.org/wp-content/uploads/2015/02/CCG-at-NLUD-NOC-Online-Intermediaries-Case-Studies.pdf (last visited on July 4, 2015).

[13] (2013) 12 SCC 73 (India).

[14] “European Commission and IT Companies announce Code of Conduct on illegal online hate speech” (Press Release), available at http://europa.eu/rapid/press-release_IP-16-1937_en.htm.

[15] Ibid.

(Rishabh is a student at Jindal Global Law School and currently an intern at CCG.)

Supreme Court to pronounce judgment on Criminal Defamation tomorrow

Tomorrow at 10.30 am, in the Supreme Court’s Court Room no. 4, a bench of Justices Dipak Misra and Prafulla Pant will pronounce judgment on the constitutional validity of criminal defamation (Sections 499 and 500 of the IPC and Section 199 of the CrPC).


A Supreme Court bench of Justices Dipak Misra and Prafulla Pant is hearing a set of at least thirty petitions challenging the constitutional validity of criminal defamation (Sections 499 and 500 of IPC and section 199 of CrPC).

The summary of hearings from the first six days can be found here.


SC hears the Aadhaar #NotAMoneyBill Challenge

A Supreme Court bench of the Chief Justice and Justices R. Banumathi and U.U. Lalit today took up a petition by Mr. Jairam Ramesh, Member of Parliament (Rajya Sabha), challenging the Lok Sabha Speaker’s certification of the Aadhaar Act as a money bill.

Senior Advocates Mr. P. Chidambaram, Mr. Kapil Sibal and Mr. Mohan Parasaran represented the petitioner, while the Attorney General and Additional Solicitor General Ms. Pinky Anand represented the Government.

Mr. Chidambaram stated that the Aadhaar Bill is not a money bill, as it does not meet the criteria laid down in Article 110 of the Constitution. The bench inquired whether the question of certification is open to judicial review. Mr. Chidambaram stated that it is the petitioner’s stand that it is open for review, whereas the AG stated that it is not.

The AG also objected to the petitioner filing the petition under Article 32 of the Constitution. Mr. Chidambaram stated that the rule of law is a fundamental right and that if it is violated by Parliament, a cause of action arises. He added that there are judgments of the Court which state that a person can approach the Court under Article 32 if a case raises a substantial question of constitutional law.

The AG reiterated his objection on locus, and the Chief Justice asked whether the AG was saying that the rule of law is not a fundamental right. The AG stated that the rule of law is a fundamental right; however, the definition of the rule of law is so broad that if cases are admitted for its violation, there will be no difference between the remedies provided by Article 32 and Article 226 of the Constitution. He stated that a matter relating to a person’s seniority may involve a question of the rule of law, but the person cannot approach the Court under Article 32.

Mr. Chidambaram responded that equating the current case to a case of seniority would be making a caricature of the argument. He stated that both houses of Parliament have equal status and power, and that in this case the decision of the presiding officer of one house has deprived the other house of its powers.

He added that in the present case the presiding officer of the Lok Sabha violated a basic rule of law, and that this is too grave a matter to be rejected on the argument of locus. He stated that there is a clear violation of, among others, Article 14, which embodies the rule of law.

On the question of judicial review, the AG stated that if a bill is certified by the Speaker as a money bill, that decision cannot be examined. In response, Mr. Chidambaram argued, citing the Raja Ram Pal case of 2007, that immunity extends only to matters of procedural irregularity, not illegality. (For a detailed analysis of why the Supreme Court has the power to judicially review the Speaker’s decision to classify the Aadhaar Bill as a money bill, please see: Aadhaar Act as a Money Bill – Judicial Review of Speaker’s Determination Concerning Money Bills)

The Court asked both the parties to submit a list of relevant cases and listed the matter for 20th July.