Caught on Camera: India is Woefully Unprepared for Facial Recognition Technology

Varsha Rao

The current discourse surrounding the use of facial recognition technology in surveillance operations, prompted by the recent ban in San Francisco, is populated by cost-benefit analyses – the cost of privacy and freedom of assembly versus the benefit of nabbing criminals and identifying missing children.

The problem with such an approach is that it pits individual rights and freedoms against the State’s duties without allowing for space to address shortcomings. It creates a shallow profile of the person opposing or favouring the technology. If you value your privacy and oppose real-time surveillance, you must be soft on crime. If you are willing to implement large-scale surveillance to track the thousands of children that go missing every year, you must be indifferent to the prejudices faced by minority communities.

The country is at a point in time where facial recognition technology is in widespread use by law enforcement officials (such as the Punjab Artificial Intelligence System and Chennai’s FaceTagr) and private companies (Paytm) without any legislation to regulate its implementation. On the one hand, that is an alarming reality to face. On the other, since nothing has been set in stone yet, there is a sizeable opportunity to develop regulations that wholeheartedly attempt to address the spectrum of citizen concerns.

The Obvious Red Flags in India’s Social Fabric

According to a deputy police commissioner in Chennai, to avoid misuse of their facial recognition technology, police personnel have been instructed to refrain from using the application unless they find a person suspicious. For a country steeped in caste-based and communal prejudices, we cannot brush aside the extent to which the concept of a “suspicious person” can be corrupted at the individual policeman’s level. There is ample evidence from the United States to warn us of biases that manifest in the form of tragic police killings.

India continues to live under the shadow of the Criminal Tribes Act of 1871 – the predecessor of the Habitual Offenders Act, 1952 – which linked unfounded allegations of hereditary criminality to certain marginalized communities. Furthermore, a socioeconomic study of prisoners on death row in India (published in 2016) yielded distressing insights into the criminal justice system: ~35% of death row inmates belong to the OBC community and 25% to the SC/ST community. Religious minorities were found to comprise 21% of death row prisoners. This is not to say that purely direct discrimination is at play on the part of the police force and criminal justice system, but it is enough to hint at subconscious prejudices and microaggressions permeating India’s justice delivery mechanisms.

Lessons from the DNA Technology Bill

Legal expert Usha Ramanathan had pointed out in her dissent note on The DNA Technology (Use and Application) Regulation Bill, 2018 that the proforma in use by Government agencies, such as the Centre for DNA Fingerprinting and Diagnostics (CDFD), inquires about the caste of the person whose DNA is being collected. Instances such as this play right into our suspicions about prejudiced profiling.

Important observations can also be drawn from the test carried out by the American Civil Liberties Union (ACLU) on Amazon’s facial recognition tool “Rekognition” and its aftermath. The test revealed that the software had incorrectly singled out 28 members of the U.S. Congress as people who had been arrested for a crime. The false matches disproportionately comprised people of colour – nearly 40% of Rekognition’s false matches.

In response, Amazon highlighted that the 80% confidence threshold used by the ACLU is appropriate for general use cases (such as identifying celebrities on social media) but not for public safety use, and that this mismatch produced the false positives. In Amazon’s own tests, the recommended confidence threshold of 99% resulted in a misidentification rate of zero. Notably, a 2017 Amazon blog post demonstrated the use of Rekognition to identify persons of interest for law enforcement using a confidence threshold of 85% (indicated by the variable ‘faceMatchThreshold’ in the code). After the findings of the ACLU study, Amazon raised its recommended confidence threshold from 95% or higher to 99%.
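To make the role of this single parameter concrete, here is a minimal Python sketch (not the actual Rekognition API; the names and similarity scores are invented for illustration) showing how the same set of candidate matches yields very different results at an 80% versus a 99% threshold:

```python
# Illustrative only: not the Rekognition API. Each candidate match carries a
# similarity score; the caller-chosen threshold decides what counts as a "match".
def filter_matches(candidates, threshold):
    """Keep only candidates whose similarity score meets the threshold."""
    return [c for c in candidates if c["similarity"] >= threshold]

# Hypothetical scores for one probe image against a mugshot database.
candidates = [
    {"name": "person_a", "similarity": 99.2},
    {"name": "person_b", "similarity": 86.0},
    {"name": "person_c", "similarity": 81.5},
]

# At an 80% threshold, all three are reported as matches; at 99%, only one is.
print(len(filter_matches(candidates, 80)))  # 3
print(len(filter_matches(candidates, 99)))  # 1
```

Lowering the threshold does not make the underlying model more accurate; it only widens the net, which is why the same tool can look reliable or reckless depending on one configuration choice.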

When the creators of the software cannot keep their own numbers straight, how much confidence can we realistically repose in our law enforcement officials?

The false positives generated by DNA evidence can shed some light on the potential consequences of using new technology such as facial recognition. DNA profiling is a probabilistic and statistical exercise – first, the likelihood that the collected DNA belongs to the suspect is analysed, followed by the likelihood that the sample could belong to someone else in a given population. Unfortunately, when DNA profiling is sold as the be-all and end-all of nabbing criminals, these statistical nuances are left out of the argument. Unless police personnel and judges are trained to fully comprehend the evidence placed before them, the value of incorporating science into the criminal justice system is diminished, and any attempt at doing away with wrongful convictions is neutralized.
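The nuance can be made concrete with a back-of-the-envelope calculation (the numbers below are assumptions chosen for illustration, not figures from any real case): even a one-in-a-million random-match probability implies dozens of coincidental matches in a large population.

```python
# Simplified illustration of the base-rate problem with forensic matching.
# Both figures below are assumed for the sake of the example.
random_match_probability = 1e-6   # assumed: 1-in-a-million chance of a coincidental match
population = 50_000_000           # assumed: pool of plausible alternative suspects

expected_coincidental_matches = random_match_probability * population

# If exactly one person is the true source, the chance that a randomly
# matched individual is actually that source is far from certainty.
prob_match_is_true_source = 1 / (1 + expected_coincidental_matches)

print(expected_coincidental_matches)        # 50.0 people match by chance alone
print(round(prob_match_is_true_source, 3))  # 0.02
```

This is precisely the reasoning step that disappears when a database match is presented in court as near-certain proof of identity.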

The Way Forward

The point of highlighting the potential and actual misuse of facial recognition technology is not to demonize tech companies, hold them accountable for the failures of the State or ignore the benefits of such technology in cases of missing children and human trafficking. The point, instead, is to create room for improvement and minimize the infringement of fundamental human rights by emerging technology. Developers should not herald the benefits while wilfully ignoring feedback and minimizing the appearance of drawbacks.

If they continue to do so, then they are truly missing the point.

Instead, tech companies need to insist on the establishment of a regulatory framework to govern the use of facial recognition technology on public and commercial premises. If not for the sake of human rights and freedoms, then at least to avoid outright bans on the technology and ensure stability in future policies.

Since India is home to the world’s largest biometric database – the much-coveted Aadhaar – the country is in a unique position to assume a leadership role when it comes to regulating facial recognition technology. The Government cannot pass the buck of responsibility to the tech companies as it has attempted to do with the problem of ‘fake news’.

Imposing standards of oversight, limiting function creep, exploring issues of privacy and consent, protecting databases from cybersecurity breaches and the redressal of complaints – all fall within the ambit of Government control. Stakeholder consultations with tech companies and civil society will ensure a richness of debate and hopefully, incorporate the voices of the marginalized as they have the most to lose in a surveillance-heavy environment.

The path that India chooses to follow in relation to facial recognition technology must be firmly in the opposite direction of the Chinese government, which has allegedly been deploying such technology to keep tabs on the oppressed Uighur Muslim population. Hopefully, the next Government in power will have the political will to pick up the slack in harmonizing effective protections for rights and freedoms with the benefits of emerging technology. But for now, if you happen to spot a camera trained on you in a marketplace such as Chennai’s T. Nagar, don’t forget to smile and wave.

(Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.)

Will Fake News Decide the World’s Largest Election?

Varsha Rao


Social media platforms like Facebook and Twitter have been hauled up by many an authority for their role as conduits in the dissemination of ‘fake news’, including the United States Senate and India’s Parliamentary Standing Committee on Information Technology.

In the run-up to the Lok Sabha Elections, social media and messaging platforms have put in place strategies to check attempts at spreading misinformation.

To prevent voter suppression (acts aimed at reducing voter turnout), Facebook bans misrepresentations about the voting process, such as claims that people can vote online. The launch of political advertisement archives in India by Facebook, Google and Twitter has also contributed to the fight against misinformation campaigns.

Fact-checkers are the second line of defence after artificial intelligence and machine learning have had their turn in identifying potential pieces of false news. The judgement and research ability of the fact-checker determine the rating that the highlighted content will receive, as well as its priority on platforms such as Facebook’s News Feed.

However, entities involved in misinformation campaigns remain undeterred.



Cutting Off a Hydra’s Head: Challenges Befalling Social Media Intermediaries

The existence of multiple forms of social media and competing platforms allows malicious actors to engage in ‘platform shopping’ and utilize methods which throw up fewer obstacles.

An example of this is podcasts or series of audio files. Scrutinizing the content of podcasts for hate speech and misinformation is much more difficult than identifying buzzwords in articles. When platforms disseminating podcasts do not have transparent policies on taking down content, there is no guarantee that flagging a podcast for problematic content will contain its reach. Furthermore, since podcasts are in a nascent stage of popularity, platforms may not have the resources or funding to engage in extensive fact-checking or hire third party fact-checkers.

The emergence of ‘deepfakes’ or artfully doctored photos and videos has contributed to the flow of misinformation on social media as well. Lack of awareness regarding the existence and popularity of ‘deepfakes’ along with the difficulty in spotting manipulations in the footage exacerbates its ability to influence the target audience.

Social media companies are well aware that they are going up against determined actors with the capacity to generate creative solutions on the fly. One such example was observed by an American reporter on the chat platform Discord. When a Twitter user complained about Twitter’s proactive measures in deleting accounts connected to voter suppression attempts, another user suggested applying Snapchat filters to photos found online when creating a fake account, in order to evade reverse image searches.

It does not help that certain challenges faced by social media companies have no immediate solutions. In its press releases, Facebook has highlighted the scarcity of professional fact-checkers worldwide, the time it takes for a complex news item to be scrutinized and the lack of meaningful data in local dialects to aid machine learning.

Furthermore, while social media companies have implemented solutions in good faith, these have been shown to fall short of tackling the problem as a whole. A reporter for The Atlantic drew attention to a loophole in Facebook’s Ad Library authentication process, an otherwise effective dragnet in a sea of insidious advertising: by setting up a limited liability company to act as the publisher of an ad, special interest groups can obscure their identity and continue to sponsor ads on Facebook. The inability to predict users’ behavioural tendencies can also undermine a solution, as on WhatsApp, where the labelling of forwarded messages may not prompt recipients to question a message’s legitimacy if they have faith in the credibility of the sender.

While scrutinizing the strategies offered by social media platforms and other intermediaries, it is important to keep in mind that the problem of ‘fake news’ is not a new phenomenon. The introduction of the printing press in the 15th century also unleashed a wave of ‘fake news’ regarding witches and religious fanaticism which would be printed alongside scientific discoveries. Thus, while social media may have amplified its reach – much like a microphone does in the hands of a speaker – it is ultimately the individual spewing vitriol that is the true culprit. The burden of generating solutions cannot be solely borne by the intermediaries.


Is the Government’s Heart in the Right Place? Misplaced Solutions for an Insidious Problem

Unfortunately, the Government has made scarce headway in curbing the dissemination of ‘fake news’. In April 2018, the Ministry of Information and Broadcasting issued a directive stating that the accreditation of journalists found to have generated or circulated ‘fake news’ would be suspended for a period determined by the frequency of violations, and cancelled upon a third violation. The guidelines were immediately withdrawn on the direction of the Prime Minister’s Office after the Government was heavily criticized by journalists and media bodies for attempting to muzzle the free press.

In December 2018, the Ministry of Electronics and Information Technology published the draft Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018, which requires – inter alia – that intermediaries enable tracing of the origin of information and deploy automated tools to proactively identify and remove unlawful content. WhatsApp, a platform with end-to-end encryption, took a stand against breaking encryption and pointed to privacy and free speech concerns to justify their position.

As countries attempt to block the dissemination of ‘fake news’ on the internet and regulate the flow of information on social media platforms, it is imperative to ensure that overbroad definitions and strategies do not end up promoting political censorship.

China’s crackdown on ‘online rumours’ since 2013 is an example of the State controlling information flow. Not only must ‘rumours’ – including content undermining morality and the socialist system – be removed by social network operators, but their publication can also result in a jail term of three years for the creator. The licences that social media networks require to operate in China may be held hostage if their interpretation of ‘rumours’ does not align with that of the Chinese authorities. This incentivizes overly cautious intermediaries to block or report content that seems ‘fake’ by the Government’s standards, leading to collateral censorship.


‘What Is’ versus ‘What Could Have Been’: The Pitfalls of Election Campaigning  

The lack of significant engagement and progress on the ‘fake news’ and misinformation front is certainly a cause for concern as it points to a lack of political will.

The misuse of social media and messaging platforms by the ruling party as well as the Opposition has been widely reported by news outlets. The BJP President allegedly told the party’s IT cell volunteers that its 32 lakh-strong WhatsApp groups allow the BJP to deliver any message to the public, even if it is fake. Last month, Facebook took down pages connected to the Congress IT cell, as well as an IT firm behind the NaMo app, for coordinated inauthentic behaviour and spam. WhatsApp’s head of communications has also interacted with political parties to highlight that WhatsApp is not a broadcast platform and that accounts engaging in bulk messaging will be banned.

For political parties, there is much to gain by manipulating public opinion in a country where elections are tightly contested along narrow margins and election results have a long-lasting impact on the intricate fabric of national identity. Back in 2013, the Internet and Mobile Association of India (IAMAI) had gathered from a social media survey conducted in 35 Indian cities that the votes of only 3-4% of social media users could be swung. Of course, this was before the 2016 U.S. Presidential Elections, which saw social media disinformation campaigns executed with renewed vigour.

As a starting point, political parties could have agreed to refrain from executing misinformation campaigns and instead opted to encourage healthy debate based on verifiable facts to influence the electorate. Mud-slinging and propaganda campaigns may well win elections. However, political candidates cannot carry on with business as usual when ‘fake news’ in India has become a life-and-death issue with lethal consequences.

In the run-up to its federal elections in 2017, major political parties in Germany entered into a ‘gentleman’s agreement’ to disregard information leaked as a result of cyberattacks instead of exploiting it. An agreement by Indian political parties on the ethics that ought to govern social media use would have underscored the same spirit.

Instead of attempting to increase the burden on intermediaries, the Government could also have undertaken extensive digital literacy campaigns to build resilience against attempts at manipulation, be it domestic or foreign. The campaigns could have been structured to highlight the techniques by which false information is propagated to manipulate the psychology of the voter.

Social media platforms, political parties and the Election Commission form a trinity that shares the responsibility of protecting the authenticity of content informing a voter’s choice. While the degree of responsibility may be different, without collaboration, the goal will remain unachievable. The shortcomings of the political parties do not absolve the social media intermediaries of their responsibility. It took Twitter until half of polling had been completed to launch an anti-voter suppression feature on the microblogging platform. There have also been multiple instances of ‘fake news’ being taken down on other social media platforms but remaining in circulation on Twitter.

The impact of misinformation campaigns on the Lok Sabha elections will be uncovered only once the elections come to an end. The best-case scenario is that they have a negligible impact on the election result. The worst-case scenario? The influence is so pervasive that we will follow in the footsteps of the U.S. and take a minimum of two years to uncover its reach.

Regardless of what ultimately happens, perhaps there is one thing we can all agree on – not enough has been done to protect this “festival of democracy” from being manipulated.


(Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.)

Securing Electoral Infrastructure: How Alert is India’s Election Chowkidaar?

Varsha Rao

With the publication of Special Counsel Robert Mueller’s much-awaited report on Russian interference in the United States Presidential Elections of 2016, the threat of hacking and misinformation campaigns to influence elections is taking centre-stage yet again. Closer to home, the discussion has become more pertinent than ever before. In a democratic process of gigantic proportions, 900 million Indians across 543 constituencies are expected to cast their vote in 7 phases to elect a Government for the next five years.

The gravity and significance of the ongoing General Elections to the Lok Sabha thus raise the question – how susceptible is the world’s largest democracy to cyber interference?

Interfering in an election in the digital age involves a two-pronged attack – firstly, by influencing the political inclination of the electorate via misinformation campaigns on social media platforms, and secondly, by manipulating the electoral infrastructure itself. This article will focus on the latter, more specifically, the infrastructure and processes administered by the Election Commission of India.

Voter Registration Databases and Election Management Systems (EMS)

Unfettered access to voter registration databases arms malicious actors with the ability to alter or delete the information of registered voters, thereby impacting who casts a vote on polling day. Voter information can be deleted from the electoral rolls to accomplish en masse voter suppression and disenfranchisement along communal lines in an already polarized voting environment. The connectivity of voter databases to various networks for real-time inputs and updates makes them highly susceptible to cyberattacks.

The manipulation of election management systems (EMS) can have an even wider impact on the electoral process. Gaining access to the Election Commission’s network would be akin to creating a peephole into highly confidential data ranging from deployment of security forces to the tracking of voting machines.

Election Commission staff can be targeted via phishing attacks in a manner similar to the cyberattacks executed during the 2016 U.S. Elections. Classified documents of the U.S. National Security Agency (NSA) as well as Special Counsel Robert Mueller’s report confirm that hackers affiliated with the Russian government targeted an American software vendor enlisted with maintaining and verifying voter rolls. Thereafter, posing as the vendor, the hackers successfully tricked government officials into downloading malicious software that creates a backdoor into the infected computer.

The Election Commission has made proactive attempts to improve the cyber hygiene of its officials by conducting national and regional cybersecurity workshops and issuing instructions regarding vigilance against phishing attacks. Furthermore, Cyber Security Regulations have been issued to regulate the officers’ online behaviour. A Chief Information Security Officer (CISO) was appointed in December 2017 at the central level, and Cybersecurity Nodal Officers have been appointed at the State level.

The Election Commission has also addressed spoofing attempts by taking down imposter apps from mobile phone app distribution platforms. According to newspaper reports, the Election Commission has carried out a third-party security audit of all poll-related applications and websites, and enabled Secure Sockets Layer (SSL) on the Election Commission website to encrypt information exchanged between a user’s browser and the website.

There is no doubt that cybersecurity risks are constantly evolving, and it remains imperative for the Election Commission to conduct systematic and periodic vulnerability analyses in collaboration with security auditors to update Election Commission systems and software.

Electronic Voting Machines

An EVM is made up of two units – a Control Unit and a Balloting Unit, linked by a five-metre long cable. The Presiding/Polling Officer uses the Control Unit to release a ballot. This allows the voter inside the voting compartment to cast their vote on the Balloting Unit by pressing the button labelled with the candidate name and party symbol of their choice. An individual cannot vote multiple times as the machine is locked once a vote is recorded, and can be enabled again only when the Presiding Officer releases the ballot by pressing the relevant button on the Control Unit.

While the Election Commission has reiterated time and again that EVMs are tamper-proof, the machines have come under criticism from security researchers and computer scientists. To defend the integrity of EVMs, the Election Commission frequently cites the simplicity of the machine’s design. The EVMs are battery-operated in order to be functional in parts of the country without electricity access. Additionally, they are not connected to any online networks, nor do they contain wireless technology, thereby mitigating the possibility of remote software-based attacks. While these factors certainly reduce the potential for EVM hacking, they do not justify the Election Commission’s unshakeable belief that EVMs are infallible.

The most explosive demonstration of EVMs being susceptible to hacking attempts was carried out all the way back in 2010 by a Hyderabad-based technologist, Hari K. Prasad in collaboration with J. Alex Halderman, an American computer science professor and Rop Gonggrijp, a hacker who campaigned to decertify EVMs in the Netherlands.

Various personnel interact with the EVM, right from the beginning of the supply chain to the officials and staff responsible for its storage and security before and after polling. In a paper published by Hari Prasad and his team, two methods of physical tampering were tested and demonstrated. The first method is to replace the Control Unit’s display board which is used during the counting process to show the number of votes received by candidates. The dishonest display board, on receiving instructions via Bluetooth, would have the ability to intercept the vote totals and display fraudulent totals by adjusting the percentage of votes received by each candidate. The second method involves attaching a temporary clip-on device to the memory chip inside the EVM to execute a vote-stealing program in favour of a selected candidate.

The physical security of the EVM takes on manifold importance in light of the above. The Election Commission has strict procedures in place to transport and store the machines, employing GPS and surveillance technology. Storage spaces known as ‘strong rooms’ having a single-entry point, double lock system and CCTV coverage are utilised. However, there have been frequent news reports about cases of EVM theft, strong room blackouts as well as unauthorized access.

The Election Commission has argued that since mock polls are conducted before official polling commences, any malfunctions or tampering attempts will be detected before they can impact the electoral process. However, this countermeasure does not address the possibility of attackers programming their tampering devices to kick into gear only after the EVM has recorded a set number of votes, thereby skipping over any mock poll entries.
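A toy simulation (purely illustrative; it reflects no real EVM firmware, and the thresholds are invented) makes the gap in this countermeasure explicit: a vote counter that behaves honestly until a threshold is crossed sails through a small mock poll.

```python
# Illustrative only: a hypothetical tampered vote counter that stays honest
# until a set number of votes has been recorded, then flips every later vote.
class TamperedCounter:
    def __init__(self, activate_after=200, favoured=0):
        self.votes = []
        self.activate_after = activate_after  # tampering dormant until this count
        self.favoured = favoured              # candidate index the tampering favours

    def record(self, candidate):
        # Behave honestly until the activation threshold is crossed.
        if len(self.votes) >= self.activate_after:
            candidate = self.favoured
        self.votes.append(candidate)

machine = TamperedCounter(activate_after=200, favoured=0)

# Mock poll: 50 alternating test votes are recorded faithfully...
for i in range(50):
    machine.record(i % 2)
assert machine.votes[:50] == [i % 2 for i in range(50)]  # the mock poll passes

# ...but on polling day, every vote after the 200th is silently flipped.
for i in range(300):
    machine.record(i % 2)
assert all(v == 0 for v in machine.votes[200:])
```

Because a mock poll never reaches the activation threshold, pre-poll testing of this kind cannot rule out such a trigger; only chip-level verification or statistically meaningful paper audits can.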

Furthermore, while the source code is written onto the EVM chip by Indian public sector undertakings (PSUs), the microchips themselves are imported from the United States and Japan. Since the EVM chip is one-time programmable, it can neither be read, copied nor overwritten. The benefit of this feature is that the chips cannot be re-programmed by malicious actors. However, the masking also has a downside – if any vulnerabilities are inserted into the chip or source code as the machine components move along the supply chain, it may not be possible to detect them.

Introducing a Voter Verifiable Paper Audit Trail (VVPAT) system was widely touted as a second layer of verification to catch any EVM malfunctions. It was only at the insistence of the Supreme Court that the Election Commission agreed to roll out EVMs with VVPATs for the ongoing General Elections.

When a vote is cast, the battery-operated VVPAT system prints a slip containing the serial number, name and symbol of the candidate, which is available for viewing through a transparent window for a few seconds. Following that, the slip falls into a sealed drop box.

An effective VVPAT audit is an important answer to the vulnerabilities plaguing EVMs. The Election Commission’s procedure involved counting the VVPAT slips of one polling booth per Assembly segment for the General Elections. The Supreme Court had to intervene again – at the insistence of Opposition parties – for the Election Commission to increase the audit from one EVM to five per Assembly segment. The Court did not accept the Opposition parties’ plea to have 33-50% of votes verified.

The call for extensive VVPAT slip audits has been an ongoing battle, with bureaucrats, politicians and experts on the frontlines. Former bureaucrats had written to the Election Commission to increase the audit sample size to 50 machines per 1 lakh booths instead of 5-6 machines. A former Chief Election Commissioner has proposed that the two runners-up in a constituency be given the option to randomly select two EVMs each for a VVPAT slip audit – a procedure similar to the Umpire Decision Review System in cricket. Another proposed method, known as a Risk Limiting Audit, requires ballots to be audited until a pre-determined statistical threshold of confidence is met.
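The idea behind a risk-limiting audit can be sketched in a few lines (a simplified, ballot-polling BRAVO-style test with invented numbers, not the specific procedure proposed to the Election Commission): each sampled paper slip updates a running test statistic, and sampling stops only once the evidence clears a pre-set risk limit.

```python
# Simplified ballot-polling sketch of a risk-limiting audit (BRAVO-style).
# All numbers are illustrative assumptions.
def rla_confirms(sampled_ballots, reported_winner_share, risk_limit=0.05):
    """Return True once the sample confirms the reported winner at the risk limit."""
    test_statistic = 1.0
    for ballot in sampled_ballots:
        if ballot == "winner":
            test_statistic *= 2 * reported_winner_share        # evidence for the result
        else:
            test_statistic *= 2 * (1 - reported_winner_share)  # evidence against it
        if test_statistic >= 1 / risk_limit:
            return True   # confident at the risk limit; sampling can stop
    return False          # inconclusive: escalate, e.g. towards a full hand count

# A sample echoing a comfortable reported margin (60%) confirms the result.
print(rla_confirms(["winner"] * 18 + ["loser"] * 7, reported_winner_share=0.60))   # True

# A sample at odds with the reported margin fails to confirm it.
print(rla_confirms(["winner"] * 10 + ["loser"] * 10, reported_winner_share=0.60))  # False
```

The appeal of this design is that the sample size is not fixed in advance: a comfortable margin is confirmed quickly, while a tight or suspect result automatically escalates towards a full hand count.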

The resistance displayed by the Election Commission to introducing VVPAT slip audits as well as expanding the sample size of the audits is alarming. The Chief Justice of India even reprimanded the Election Commission for “insulat[ing] itself from suggestion for improvement”. Unsurprisingly, the Court had to reassure the Election Commission that in making recommendations to improve the electoral process, it was not casting aspersions on the functioning of the body.

While it is commendable that the Election Commission has embraced the implementation of technology like EVMs in the electoral process, it is becoming clear that it has not incorporated the tradition of vulnerability research and software patching to prevent further exploits. Security researchers must be provided time and unfettered access to test the efficacy and security offered by EVMs. Hacking challenges should not be restricted to EVM replicas or superficial tinkering on the external body of the EVM.

It is understandable for an authority like the Election Commission to focus on protecting the integrity of the institution as well as the election infrastructure. However, pointing out flaws in the EVM technology is not equivalent to an attack on the institution of the Election Commission. While the entire process of elections is built around trust – be it trust in the method of casting votes or trust in the authority tabulating the votes – it is the responsibility of those in whom the trust of the electorate is reposed to ensure transparency at every stage and welcome public scrutiny, especially when new and complex technology is being employed.

(Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.)

Celebrating One Year of the Puttaswamy Judgment (August 24, 6.00 pm, IIC)

Celebrating One Year of the Justice K.S. Puttaswamy v. Union of India Judgment

August 24, 2018

6.00 pm onwards

organized by

Indian Council for Research on International Economic Relations


Centre for Communication Governance at National Law University Delhi


Conference Hall – 2 | India International Centre | Max Mueller Marg | New Delhi



In a landmark decision on August 24th last year, a nine-judge bench of the Supreme Court unanimously upheld the fundamental right to privacy.

More recently, a committee headed by Justice B.N. Srikrishna submitted a report and a draft bill on data protection. Public Comments on the bill are due by early next month. The Supreme Court’s judgment on the Aadhaar challenge is imminent. There have also been other developments in this context such as RBI’s data localisation directive, the DNA profiling bill, the draft information security in health care bill, data localisation provisions in the e-commerce policy and the government’s recently withdrawn proposal to create a social media communications hub.

To commemorate the anniversary of the judgment and discuss the recently released Data Protection Bill and related issues, we are hosting this discussion on privacy and data protection.

Timings Programme
6.00 – 6.15 pm Tea & Coffee
6.15 – 6.20 pm Initial Remarks
6.20 – 6.40 pm State of Privacy in India & the Challenges to Realising Puttaswamy’s Promise

Dr. Usha Ramanathan, Independent Law Researcher

6.40 – 7.30 pm Data Protection for a Free and Fair Digital Economy

The recently released draft data protection framework recognises the need to balance privacy with a free and fair digital economy. It articulates some of the benefits of big data and encourages its growth. However, it has been argued that compliance with such a framework will require current business models to change. Additionally, stringent provisions mandating where personal data may be processed, and the wide discretion given to the central government and the regulatory authority, raise questions about the framework’s impact on the second-largest online market in the world, home to nearly 500 million active Internet users and the businesses located in it.

Moderated by: Mansi Kedia, Consultant, Indian Council for Research on International Economic Relations (ICRIER)

Madhulika Srikumar, Associate Fellow, Observer Research Foundation

Malavika Raghavan, Project Head – Future of Finance Initiative, Dvara Research

Nehaa Chaudhari, Public Policy Lead, TRA Law

Smriti Parsheera, Technology Policy Researcher, National Institute of Public Finance and Policy (NIPFP)

7.30 – 8.20 pm Legacy of the Justice K.S. Puttaswamy v. Union of India Judgment

The Court pronounced a landmark judgment last year; however, it remains to be examined whether judicial and legislative developments in India over the past year have upheld the principles enumerated in it. These include the proposed data protection framework and the ongoing hearings on the right to be forgotten, Aadhaar, Section 377 and adultery, among others.

Moderated by: Apurva Vishwanath, Special Correspondent, ThePrint

Kritika Bhardwaj, Lawyer, Supreme Court of India

Shweta Mohandas, Policy Officer, Centre for Internet & Society

Smitha K. Prasad, Civil Liberties Lead, Centre for Communication Governance at National Law University Delhi

Ujwala Uppaluri, Lawyer, Supreme Court of India

8.20 pm onwards Dinner

11 Indian States have Shut Down the Internet 37 Times since 2015

Mobile Internet services have been suspended in Kashmir for the past 87 days. There has been a sharp increase in both the frequency and duration of ICT shutdowns in the past two years. We have been tracking ICT shutdowns since 2012 as part of our research for the Freedom on the Net – India reports (2014 & 2015).

In 2012, there was only one incident of an ICT shutdown (three including Republic Day and Independence Day). The Jammu & Kashmir government blocked telecom services to prevent users from uploading or downloading the film Innocence of Muslims. This is a rare instance of a shutdown for which the government order is publicly available; most Internet shutdowns have taken place without any procedural transparency.

Jammu & Kashmir has been suspending mobile Internet on the Republic and Independence Days since 2005, with Republic Day 2015 being an exception. In 2013 there were three instances of ICT shutdowns (four including Republic Day) – all in the state of Jammu & Kashmir.

In 2014, Jammu & Kashmir blocked ICT services four times (including on Republic Day and Independence Day) and the State of Gujarat once. Gujarat ordered the block after two people were stabbed in Vadodara in clashes between two communities, following the circulation of an image on Facebook that was considered offensive to Islam.

Since 2014, there has been a massive spike in the incidence of ICT shutdowns in India. We found that 11 Indian states have shut down the Internet 37 times since 2015, with 22 of those instances occurring in the first nine months of 2016.

We have written extensively on Internet Shutdowns in the past year. For an analysis of the legal issues in case of Internet shutdowns please see:

Demarcating a safe threshold

The Anatomy of Internet Shutdowns – I (Of Kill Switches and Legal Vacuums)

The Anatomy of Internet Shutdowns – II (Gujarat & Constitutional Questions)

The Anatomy of Internet Shutdowns – III (Post Script: Gujarat High Court Verdict)

Internet Shutdowns: An Update

EU Code of Conduct on Countering Illegal Hate Speech Online: An Analysis

By Rishabh Bajoria

The Code

On 31 May 2016, the European Commission (EC) announced a new Code of Conduct for online intermediaries. The Code was formulated by mutual agreement between the EC and Facebook, Microsoft, Google (including YouTube) and Twitter.[1] It targets the prompt removal of illegal hate speech online by intermediaries. The EC stated:

“While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously acted upon by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame.”[2]

The Code later clarifies that a notification must not be “insufficiently precise” or “inadequately substantiated”. Intermediaries are obliged “to review the majority of valid notifications” in “less than 24 hours and remove or disable access” to the content. They must review notifications against the touchstone of their community rules and guidelines, and “national laws” wherever necessary.


The Code is understood to be a response to rising anti-Semitic and pro-Islamic State commentary on social media. Vĕra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, said: “The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred.”[3]

It is noteworthy that all the participating intermediaries are American. This could be a way to avoid jurisdictional conflict. For example, in Licra et UEJF v Yahoo! Inc and Yahoo! France, Yahoo! refused to comply with a French court’s order imposing liability on it for failing to disable access to sales of Nazi memorabilia on its website, which was a crime in France. Yahoo! contended that because its servers were located in the United States, the order was inapplicable. Subsequently, the U.S. District Court for the Southern District of New York, in Yahoo! Inc. v. La Ligue Contre Le Racisme et L’Antisemitisme, held Yahoo! to be a mere distributor, which could therefore only be held liable if it had notice of the content.[4]

The Code will supplement Articles 12-14 of the E-Commerce Directive 2000/31/EC. These Articles shield intermediaries from liability if they disable content “expeditiously” after receiving “notice” of it, but they set no standards for what counts as “expeditious” or “notice”. The Code clarifies these ambiguous terms for the intermediaries, which are otherwise defined by domestic legislatures.[5] Moreover, because the intermediaries have agreed to abide by both the E-Commerce Directive and the Code of Conduct, such jurisdictional issues will not arise.


The Code forces intermediaries to judge the legality of content. Once intermediaries are notified of content, they are obliged to investigate and determine whether the speech should be deleted. Twitter’s Head of Public Policy for Europe, Karen White, commented: “Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society. We remain committed to letting the Tweets flow. However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.”[6] Such a notice-and-takedown regime is problematic because this distinction is not always “clear”. There remains no universal consensus on the definition of hate speech. To evaluate whether speech falls under this category, courts across jurisdictions look at a number of factors:

  1. Severity of the speech
  2. Intent of the speaker
  3. Content or form of the speech
  4. Social context in which the speech is made
  5. Extent of the speech (its reach and the size of its audience)
  6. Likelihood or probability of harm occurring[7]

The last two criteria are not analysed for speech that incites hatred, since hate speech is, per se, an inchoate crime. The remaining factors are analysed cumulatively. Courts seek to balance the value of the speech against the State’s positive obligations to maintain public order and protect the rights of others. Former UN Special Rapporteur for Freedom of Expression Frank La Rue has argued that private intermediaries should not be forced to carry out censorship, as they are not equipped to account for the various factors involved in determining the legality of speech.[8] Unlike judicial proceedings, evaluations by private intermediaries are often opaque, and they provide none of the legal safeguards a trial does, such as a right to appeal.[9] The mandate to censor speech within 24 hours of notification exacerbates this problem.

Proponents of the Code might argue that intermediaries already engage in self-censorship according to their Community Guidelines,[10] so extending this obligation into law is not harmful. However, intermediaries are profit-oriented private corporations, and the legal obligation imposed by the Code of Conduct is accompanied by liability if breached. This threat of liability will cause them to err on the side of caution and over-censor speech. Professor Seth Kreimer, a constitutional and human rights law expert, argues that intermediaries know that potential liability will outweigh the additional revenue offered by a user.[11] This is likely to have a chilling effect on online speech.[12] It was on similar grounds that the Indian Supreme Court, in Shreya Singhal v Union of India,[13] rejected the “private notice and takedown” standard, holding that an intermediary will be liable only if it fails to comply with a judicial order declaring content illegal.

For example, assume someone posts a controversial tweet. Presumably, this would be flagged by users for removal. Even if the notice is “valid” and not “insufficiently precise”, Twitter will still have to investigate it before taking the tweet down. At present, large corporations like Twitter typically have a legal team for this. However, that legal team will have to evaluate, within 24 hours, whether the speech incites violence or hatred. For this, it will have to analyse the speech’s content and severity, the intent of the speaker and the social context, and scrutinise the causality between the speech and potential violence. This is a nearly impossible task. Moreover, the team cannot know whether the judiciary would reach the same verdict. So if Twitter continues to disseminate the speech in good faith, and the judiciary later deems it illegal, it can be held liable. This threat will make it remove speech wherever the “distinction between freedom of expression and conduct that incites violence”[14] is not “clear”.[15] In the face of millions of such requests, intermediaries cannot be expected to make sound legal evaluations. As a result, society may be deprived of potentially valuable speech.

The Code thus effectively mandates private censorship. Intermediaries will not be able to make nuanced evaluations of whether speech incites hatred or violence within 24 hours, yet they can be held liable if a court later finds content impermissible, even where they declined to delete it in good faith. The fear of this liability will make intermediaries err on the side of caution and over-censor, making the Code a recipe for a chilling effect online. While preventing terrorist propaganda is a legitimate aim, this response will disproportionately restrict freedom of speech and expression online.

[1] Code of Conduct on Countering Illegal Hate Speech Online, available at; “European Commission’s Hate Speech Deal With Companies Will Chill Speech”, available at

[2] “European Commission and IT Companies announce Code of Conduct on illegal online hate speech”(Press Release) , available at,.


[4] Omer, Corey. “Intermediary Liability for Harmful Speech: Lessons from Abroad.” Harv. J.L. & Tech. 28 (2014): 289-593.

[5] Verbiest, Thibault, Gerald Spindler, and Giovanni Maria Riccio. “Study on the liability of internet intermediaries.” Available at SSRN 2575069 (2007).

[6] “European Commission and IT Companies announce Code of Conduct on illegal online hate speech”(Press Release) , available at

[7] Toby Mendel, Study on International Standards Relating to Incitement to genocide or Racial Hatred, a study for the UN Special Advisor on the prevention of Genocide, April 2006, available at; “Towards an interpretation of Article 20 of the ICCPR: Thresholds for the prohibition of incitement to hatred”, available at

[8] HRC, ‘Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression’ by Frank La Rue available at

[9] Jack M. Balkin, ‘Old-School/New-School Speech Regulation”, (2014) 127 Harvard Law Review 2296.

[10] Freiwald, Susan. “Comparative Institutional Analysis in Cyberspace: The Case of Intermediary Liability for Defamation.” Harv. JL & Tech. 14 (2000): 569; MacKinnon, Rebecca, et al. Fostering Freedom online: the role of internet intermediaries. UNESCO Publishing, 2015.

[11] Seth F. Kreimer, ‘Censorship by Proxy: The First Amendment, Internet Intermediaries, and the Problem of the Weakest Link’ (2006) 155 (11) U. Pa. L. Rev 2-33.

[12] Chinmayi Arun & Sarvjeet Singh, NoC Online Intermediaries Case Studies Series: Online Intermediaries in India 24, 25 (2015), available at (last visited on July 4, 2015).

[13] (2015) 5 SCC 1 (India).

[14] “European Commission and IT Companies announce Code of Conduct on illegal online hate speech”(Press Release) , available at

[15] Ibid.

(Rishabh is a student at Jindal Global Law School and currently an intern at CCG.)

Supreme Court to pronounce judgment on Criminal Defamation tomorrow

Tomorrow, at 10.30 am in the Supreme Court’s Court Room No. 4, a bench of Justices Dipak Misra and Prafulla Pant will pronounce judgment on the constitutional validity of criminal defamation (Sections 499 and 500 of the IPC and Section 199 of the CrPC).

The CCG Blog

A Supreme Court bench of Justices Dipak Misra and Prafulla Pant is hearing a set of at least thirty petitions challenging the constitutional validity of criminal defamation (Sections 499 and 500 of IPC and section 199 of CrPC).

The summary of hearings from the first six days can be found here.
