Supreme Court Verdict on 4G in Jammu and Kashmir Undermines the Rule of Law

The court agreed with the petitioners that the government was going against previously laid down principles – and then did nothing about it.

By Shrutanjaya Bhardwaj

This piece first appeared on the Wire on May 14, 2020

On May 11, the Supreme Court rejected a petition seeking the restoration of 4G internet services in the union territory of Jammu and Kashmir. The plea was premised on the rights violations caused by suspending the internet during a pandemic and national lockdown, including the rights to health, education, freedom of speech, freedom of trade and access to justice.

Specifically, the petition alleged violations of a January 2020 judgment of the Supreme Court, in Anuradha Bhasin vs Union of India. The court had then laid down important safeguards that the government should follow before imposing an internet shutdown.

The 4G judgment undermines the rule of law. In the judgment, the court accepts that the government has violated Bhasin, but itself fails to apply the relevant principles laid down in Bhasin. In addition, the court finally abdicates the judicial task of deciding upon the constitutional validity of the internet suspension to a “Special Committee” – composed of members of the executive.

Unconstitutional but permissible?

The most striking feature of the 4G judgment is the somewhat clear, somewhat cryptic acknowledgement that the government has violated the law laid down in Bhasin on two counts.

First, in Bhasin, the court had held that the minimal requirement for any suspension order to be lawful is that it must list the reasons for imposing restrictions: “[O]rders passed mechanically or in a cryptic manner cannot be said to be orders passed in accordance with law.” Relying on this holding, the petitioners argued that since the repeated suspension orders pertaining to Jammu and Kashmir did not disclose any reasons, they contravened Bhasin. The court agreed.

Second, in Bhasin the court was clear that any restrictions on the freedom of speech must satisfy the “proportionality” test – which means the restrictions must be a proportionate response to the aim sought to be achieved through the restrictions. Proportionality is judged by looking, among other things, at the “territorial extent” of the restriction. This means the internet must only be suspended in regions where an imminent threat to public order exists. The petitioners relied on this holding and challenged the suspension orders on the ground that they apply to the entire union territory, without explaining why such a need exists. Once again, the court agreed.

Surprisingly, however, despite agreeing with the petitioners on both counts, the court refused to invalidate the suspension orders. It held that while the petitioners’ submissions would merit consideration in “normal circumstances”, the “compelling circumstances” presently prevailing in Jammu and Kashmir cannot be ignored.

Thus, in a unique approach to rights adjudication, the court carved out an ad hoc exception to the norms of legality and proportionality enunciated in Bhasin – in extraordinary circumstances, the court seemed to imply, constitutional safeguards are suspended.

Selective 4G access to specific websites

One facet of proportionality is that the state’s measure must be the “least restrictive” way of achieving the aim that the state seeks to achieve. In other words, out of a given set of alternatives – all of which can achieve the state’s aim – the state is obligated to choose the alternative that least burdens the right(s) in question.

According to the Bhasin bench, “before settling on [a] measure, the authorities must assess the existence of any alternative mechanism in furtherance of the… goal.” In the context of internet suspensions, one way to judge least restrictiveness is to analyse whether access has been cut off or downgraded only to particular websites which pose a threat or to the web as a whole.

In Bhasin, the court held that the state must consider the feasibility of selective blocking before resorting to a total internet shutdown. In Jammu and Kashmir, for instance, the government has been citing only social media websites as the main cause for concern because they help spread terrorism and fake news. Yet, 4G is not made available for any website, including governmental, educational, medical or news websites.

The Bhasin judgment records a specific query that the court had put to the Solicitor General – whether it was feasible to suspend only social media services rather than the entire internet. The Solicitor General had responded by saying that the same could not be done. Contrary to the Solicitor General’s claim, however, selective blocking has been employed by the government in Jammu and Kashmir after Bhasin.

2G internet was first restored in parts of the region on January 14, but only “whitelisted” sites were permitted to be accessed over the network. The number of whitelisted websites was gradually increased through the seven subsequent orders, until all websites were finally made accessible over 2G on March 4. Is it not similarly possible to selectively allow 4G access to some websites while permitting 2G access to others?

It was important to ask whether the government has explored that alternative. Yet, in an unfortunate oversight, the 4G judgment does not address the possibility of selective access at all. Contrary to the principles recognised in Bhasin, it does not hold the government accountable for its failure to consider less restrictive alternatives.

Another committee?

Finally, in a curious move, the court has set up a “Special Committee” – the Union home secretary, the Union communications secretary, and the chief secretary of Jammu and Kashmir – to “immediately” decide whether the prevalent internet restrictions are necessary. Seemingly as solace, the committee has been directed to consider the petitioners’ arguments as well. However, this is deeply problematic for at least two reasons.

First, this amounts to judicial abdication of responsibility. The constitution entrusts the function of rights adjudication exclusively to the high courts and the Supreme Court. Indeed, Article 32 of the constitution, which “guarantees” the right to approach the Supreme Court to remedy the violation of fundamental rights, prohibits the Supreme Court from abdicating in this fashion. In Prem Chand Garg vs Excise Commr., 1963, Justice Gajendragadkar eloquently spoke about the nature of the guarantee contained in Article 32:

“It is true that [fundamental] rights are not absolute…. But, the scheme of Article 19 illustrates, the difficult task of determining the propriety or the validity of adjustments made either legislatively or by executive action between the fundamental rights and the demands of socio-economic welfare has been ultimately left in charge of the High Courts and the Supreme Court by the Constitution…. The fundamental right to move this Court can, therefore, be appropriately described as the corner-stone of the democratic edifice raised by the Constitution. That is why it is natural that this Court should, in the words of Patanjali Sastri J., regard itself “as the protector and guarantor of fundamental rights,” and should declare that “it cannot, consistently with the responsibility laid upon it, refuse to entertain applications seeking protection against infringements of such rights.”…. In discharging the duties assigned to it, this Court has to play the role “of a sentinel on the qui vive” and it must always regard it as its solemn duty to protect the said fundamental rights’ zealously and vigilantly.” (emphasis added)

The constitution, therefore, does not leave the Supreme Court with the option of abdicating its duties in favour of a committee no matter how special. Coupled with the general deferential approach evident from the judgment, this abdication by the court might have the unintended and unfortunate effect of signalling to the government that the extraordinary situation in Jammu and Kashmir is a warrant to commit unconstitutional action without accountability.

Second, passing the buck to the “Special Committee” amounts to making the executive a judge in its own cause. The suspension order that was under challenge in the 4G case was passed by the government of Jammu and Kashmir, and the Committee formed to decide upon the validity of that order includes the chief secretary of the same government (Respondent No. 1 before the Court). The other two members of the Committee – the Union home secretary (Respondent No. 2) and Union communications secretary – are both part of the Central government, which practically dictates the terms in Jammu and Kashmir.

Therefore, this abdication by the court completely abandons the principle of checks and balances by asking the executive to review its own orders.

The court should have been stricter in its approach. It should have sought a justification from the government for not applying its mind to less restrictive alternatives. It should have remained consistent with the law it laid down in Bhasin and struck down the admittedly unreasoned suspension orders.

Most of all, it should not have abdicated its responsibility in favour of the government itself. Although such an attitude would not prevent the passing of fresh suspension orders, it would certainly compel the government to think more seriously and narrowly tailor its future orders so that they only fit existing security needs and go no further.

Shrutanjaya Bhardwaj is a Delhi-based lawyer and a Fellow at the Centre for Communication Governance at National Law University Delhi

Supreme Court’s order on Kashmir internet shutdown: Judicial abdication or judicial restraint?

This post first appeared on Times of India on May 12, 2020

The Supreme Court on Monday pronounced its order in the Foundation of Media Professionals v. Union Territory of J&K (for the restoration of 4G services in Jammu and Kashmir).

The Court did not allow for the restoration of services – nor did it engage with the arguments of the parties in its order. Instead, the Court asked a special committee headed by the Union Home Secretary and comprising the Secretary of the Department of Communications, Government of India, and the Chief Secretary of the Union Territory (UT) of Jammu and Kashmir (J&K) to examine the prevailing circumstances in the UT and determine whether the restrictions on internet services should continue.

Arguments of Parties

The current petition was filed by the Foundation for Media Professionals, a not-for-profit comprising journalists committed to upholding media freedom and promoting quality journalism. In its petition, the foundation prayed for the restoration of 4G services in J&K with immediate effect. Apart from raising the challenge on the ground of the right to freedom of speech and expression [Article 19(1)(a) of the Constitution], the petition also contended a violation of Articles 19(1)(g), 21 and 21A of the Constitution.

According to the petitioners, the restriction on 4G internet in the times of Covid-19 restricts the right to business, education, health, and speech and expression of the people of J&K. The restriction makes it impossible for individuals in J&K to access information, government advisories, and orders relating to Covid-19. It makes it impossible for doctors to have video consultations and prevents doctors in the UT from gaining access to the latest studies and treatments of Covid-19. This violates the right to healthcare of the people and is a violation of Article 21. The right to access to justice of people in J&K is also restricted (since most courts are functioning only through video conferencing and filing is also taking place online), thereby violating Article 21. These restrictions also prevent a large number of people in J&K from complying with the government’s work-from-home orders, violating the right to trade under Article 19(1)(g) and the right to livelihood under Article 21.

The petitioners also argued that the order by the J&K administration does not adhere to the requirements laid down by the Supreme Court in the recent judgment in Anuradha Bhasin v. Union of India.

A significant argument of the petitioner was also that given the situation arising due to the spread of Covid-19 and the unprecedented times we are in, the restriction on 4G services is disproportionate since it applies to the entire J&K.

The government, on the other hand, argued that because of the prevailing security situation in J&K and the use of the internet by insurgents and terrorists to spread violence, it is not possible to provide 4G services in the region. It also contended that there is no restriction over broadband and fixed line internet, and that the government is taking alternate measures to provide information relating to Covid-19 and for the education of students in the region.

Anuradha Bhasin and Guidelines for Internet Shutdown

In Anuradha Bhasin, the Court laid down various guidelines/ safeguards which the government needs to follow before ordering an internet shutdown.

It held that the shutdown order should specify the exact duration of a shutdown and it cannot be indefinite. It directed the Review Committee formed under the Temporary Suspension of Telecom Services (Public Emergency or Public Safety) Rules, 2017, to review the shutdown orders every seven days.

Additionally, the Court stated that these orders must pass the test of proportionality. It held that the government must identify the exact stage of public emergency before shutting down the internet, since that will assist the committee in determining the proportionality of the measure.

However, despite laying down all the principles, the Court did not decide the validity of the shutdown orders and passed on this job to the review committee.

The Current Order: Second Round of Judicial Abdication on Internet Shutdowns?

In the current case, despite having the benefit of the Anuradha Bhasin guidelines, the Court did not apply them. As stated above, it instead asked a special committee to determine the question of the continuation of the internet restrictions.

The Court starts by stating that fundamental rights need to be balanced with national security concerns. It rightly points out the importance of the national security concerns prevailing in J&K and their role in deciding on the restrictions.

In Bhasin, the Court had already acknowledged that modern terrorism relies heavily on the internet, noting that the internet is being used to support proxy wars and to raise money, recruit and spread propaganda. It is well established that infiltration attempts in the Kashmir valley increase from May every year. This year too, around 300 terrorists are reportedly waiting to cross over from Pakistan-occupied Kashmir into India. There is also a fear of terrorists using drones in J&K.

In light of this, a clear public emergency situation exists. However, the question is whether the situation is the same in the entire Union Territory of Jammu and Kashmir and requires a restriction on 4G services in the entire region, or is the restriction overbroad?

The Court, in its order, found that the order prohibiting 4G internet services, while limited, does not specify the reasons for imposing the restriction throughout J&K. The Court has held that any order limiting services should apply only to areas “where there is absolute necessity of such restrictions to be imposed, after satisfying the directions passed earlier” [in Anuradha Bhasin].

However, strangely the Court states that “A perusal of the submissions made before us and the material placed on record indicate that the submissions of the Petitioners, in normal circumstances, merit consideration. However, the compelling circumstances of cross border terrorism in the Union Territory of Jammu and Kashmir, at present, cannot be ignored.”

While the prevailing security situation in J&K may be a legitimate aim for restricting 4G internet services, and should be a factor in determining the proportionality of the restrictions, the order does not explain why it justifies the Court’s refusal to apply its own judgment and the legal principles laid down therein.

The Court should have found the current J&K Order for restricting 4G services illegal and struck it down for not complying with the guidelines under the Telecom Suspension Rules and safeguards laid down in Bhasin. To balance it with the security concerns in J&K – the Court could have additionally provided the J&K administration a few days to come up with a new order (if they so desired), which complies with the guidelines.

The Road Ahead

The Court has directed the special committee to look at the material presented by the petitioners and examine the alternatives, including the petitioners’ suggestions of placing restrictions only in areas where there is a serious public emergency situation and allowing 3G/4G internet in certain areas on a trial basis.

While it does not provide immediate relief to the residents of J&K, this judgment, like Bhasin, is a small step forward in making internet shutdowns in India more transparent, proportional and accountable. The order by the Jammu & Kashmir administration prohibiting 3G/4G services in the region expired yesterday. One can only hope that if a new order is passed, the administration will comply with the guidelines under Bhasin and limit the restriction only to areas where there is an actual security threat.

In a country with the highest number of internet shutdowns in the world, these incremental steps by the highest court of the country may not be enough. Ultimately, they leave the fate of a large part of India’s population in the hands of bureaucrats – who may not be the best suited to make these decisions on proportionality. However, along with Bhasin, today’s order and its limited reasoning is something to be built upon in future challenges to internet shutdowns in India.

Facebook-Jio Deal: Big Data, Competition and Privacy

By Anupriya Dhonchak

This post was first published on the IndiaCorpLaw blog on May 8, 2020


Facebook recently signed an all-cash deal worth ₹ 43,574 crores to acquire a 9.99 percent stake in Jio Platforms, a subsidiary of Reliance Industries Limited (RIL). It is the largest foreign direct investment (FDI) in the technology sector in India thus far. The deal brings together Jio, India’s biggest telecom and internet service provider with over 370 million users, with Facebook, which has over 250 million users in India, and WhatsApp, with over 400 million Indian users. Concurrently, Jio Platforms, Reliance Retail and WhatsApp have also entered into a partnership to accelerate JioMart and support small businesses through WhatsApp.

The deal raises several competition and privacy concerns. The large user base of these companies has individually already been a cause of concern since it allows them to profit from network effects, making the entry of new players virtually impossible and driving out existing competition. The deal significantly expands this user base and amplifies its resultant network effects. Network effects refer to the advantages of a large user base of consumers and sellers, allowing corporates exclusive access to countless data sources and providing them unparalleled business opportunities. These companies are backed by investors with deep pockets willing to bear sustained losses for predatory growth in winner-takes-all markets. India’s 2019 draft e-Commerce Policy noted that network effects must be looked at while analyzing mergers and acquisitions.

In 2017, a nine-judge bench of the Supreme Court of India unanimously held that the right to privacy was a fundamental right and considered it integral to freedoms guaranteed across Part III of the Constitution. Despite this, the draft e-Commerce Policy characterises data as a public good or a national asset. The last Economic Survey regarded privacy as an elite preference that must not be imposed on the poor. Similarly, the UNCTAD’s Digital Economy Report 2019 promises a future wherein the greatest value for developing countries can be mined from “the monetization of large-scale digital data”.

These narratives around the seductive potential of data for competitive commerce enable BigTech’s race to the bottom to extract as much data from individuals as possible for optimal use of market opportunities. This can be problematised by appreciating the impact of corporate power over society, reorienting competition law towards its foundational justifications and concretising the following recommendations and observations.

Attempt to Monopolise

In late 2018, the Government constituted a Competition Law Review Committee (CLRC) to review and suggest changes to the Competition Act. In its final report submitted in 2019, the CLRC discussed the possibility of a provision to penalise the mere attempt to monopolise in the relevant market where a particular product or service is sold.

However, players other than dominant firms may also be able to cause significant anticompetitive effects based on their unilateral conduct. This is illustrated by Jio’s entry into and rapid capture of the telecom market by offering services initially for free and subsequently at negligible prices. In 2015, when Jio was formally launched, India had ten private sector wireless providers; that number has since fallen to three. Jio’s predatory pricing caused competitors to cut tariffs and, by March 2018, the telecom operators were in “severe financial distress” with a cumulative debt of ₹ 7.7 lakh crore and revenue under ₹ 2.5 lakh crore. Experts have cautioned against firms’ ability to engage in predatory conduct even prior to achieving substantial market power, which is reflected in amendments to competition law in the US, Germany and Japan. However, before making any concrete recommendation in this regard, the CLRC suggested a detailed study of digital markets in India.

CCI’s Market Study on e-Commerce

The Competition Commission of India (CCI) recently published a market study on e-commerce in India. The study, which is pertinent to the FB-Jio deal, highlights the many competition concerns raised by the operation of e-commerce platforms in India. According to the study, numerous e-commerce players in India compromise platform neutrality by according preferential treatment to products marketed and sold by their own subsidiaries, related parties or others. There is a lack of transparency regarding search rankings and user reviews. This leads to e-commerce platforms determining market outcomes such as sales, prices and consumer traffic, as opposed to the competitive merits of the products, because of platforms’ exclusive access to massive transaction data and their ability to control search rankings. JioMart has already gone live over WhatsApp and allows users to place orders dispatched to local kirana (grocery) stores. This is crucial since RIL plans to sell its own private labels through these kirana stores under brand names such as Best Farms, Good Life, etc.

The study also revealed the unfair and discriminatory contractual terms for different entities, evidencing small businesses’ lack of countervailing power against, and dependency on, e-commerce platforms. Even if there are no significant switching costs between platforms, there is a lack of substitutes available, because all major platforms have similar practices. Further, sellers consider it essential to have visibility over all large platforms to survive in the market and cannot afford to confine themselves to the offline segment or to select platforms alone. This allows platforms to engage in exclusive contracts with sellers, mandatorily bundle the platforms’ delivery services with their listing services and compel sellers to fund the platforms’ deep discounts on their products. There is no information regarding what constitutes the basis of platforms’ discounting practices. These discounts cannot be matched by the sellers offline on their own and drive consumer traffic to online platforms, making consumers and sellers dependent upon them via artificial price distortions.

The study also revealed that platforms engage in ‘data masking’, i.e., refusing to share vital customer information with market participants on the pretext of privacy while mining customer data themselves to launch their own products. It concluded with certain recommendations for self-regulation despite BigTech’s deliberate failure to self-regulate globally. Recently, a Wall Street Journal report alleged that Amazon used data regarding third party sellers to launch its own competing products. As a result, U.S. Congress members have called on Amazon’s CEO, Jeff Bezos, to testify before the Judiciary Committee of the House of Representatives, as part of an ongoing antitrust probe.

The CLRC report highlights Indian competition law’s growing cognizance of the relationship between market power and control over data. It noted that the definition of ‘price’ under section 2(o) of the Competition Act, 2002 is broad enough to include non-monetary considerations such as personal data and preferences revealed to digital market players. It also noted that section 3(4)(c) is broad enough to cover refusal to deal, encompassing restrictions on selling and buying through exclusive arrangements. Further, section 19(4)(b), referring to ‘resources of the enterprise’, used to assess the dominance of firms, was interpreted as wide enough to include control over data. Section 19(4) was also regarded as being inclusive enough to consider ‘network effects’ as a relevant factor for the determination of a firm’s dominance. The CLRC also suggested the introduction of necessary thresholds to ensure that digital transactions involving asset-light businesses causing an appreciable adverse effect on competition do not evade competition assessment.

These interpretations of the existing Act are crucial to the competition analysis of this deal, which is yet to receive the CCI’s approval. Some of the competition concerns raised can also be mitigated by amending the Competition Act to penalise any potential attempt to monopolise the relevant market. Finally, competition law regulates state enabled markets. Its inherently political content needs to be acknowledged to ensure that technicalities and overreliance on self-correcting markets do not compromise the democratic justifications for its existence in the first place.

Anupriya Dhonchak is a fourth-year student at National Law University Delhi, and works on technology law with the Centre for Communication Governance at National Law University Delhi

The publication of COVID-19 quarantine lists violates the right to privacy

By Sanya Kumar and Shrutanjaya Bhardwaj

This post first appeared in the Caravan on April 5, 2020

On 3 February, Kerala declared a state disaster on account of the novel coronavirus. Incoming passengers at the Trivandrum International Airport in Thiruvananthapuram were asked to sign declarations stating that they had not recently travelled to China. A month later, the union ministry of health and family welfare directed all incoming international passengers to fill self-declaration forms and undergo health screenings at the point of entry.

On 19 March, the district administration of Mohali, a satellite city of Chandigarh, published a “quarantine list” on their official website. This list had names of people who had been placed under quarantine as suspected carriers of the novel coronavirus. It also included other personal details, such as residential addresses and phone numbers. The authorities claimed that the identities of the quarantined people were revealed due to “social pressure,” and that outing those under quarantine was necessary to contain community transmission. The deputy commissioner further said that this way “people will get information about such persons while sitting at home and they would be vigilant to avoid contact with them and their family members.” In the following days, several other government authorities, including in Chandigarh, Karnataka, Odisha, Delhi, Nagpur, Ajmer, and Mumbai, prepared such “quarantine lists.” The lists were either published on their publicly accessible websites or eventually leaked through unidentified channels.

The data that was eventually used to curate these quarantine lists was first collected by the government of India, under the aegis of the union ministry of health and family welfare (MOHFW). On 3 March, the MOHFW mandated that all international passengers entering India would have to fill self-declaration forms, submit the forms to health officials and immigration officials, and undergo health screenings at the points of entry. In essence, this form operated as a prerequisite for entry into India and sought personal information, including name, residential address, phone number, port of departure and final destination.

Shockingly, it was this data obtained from incoming passengers that was used to curate the quarantine lists. All these lists included personal details in varying measures, ranging from names and phone numbers to residential addresses and ports of journey, and were freely circulating on WhatsApp and Telegram groups within a few hours. If you were on one of these lists, by that evening everybody had your personal information, and your neighbours viewed you with suspicion. Needless to say, these quarantine lists ended up operating as target lists—they have led to people facing severe harassment, ostracisation, stigma and anonymous hate-calls.

Till 6 March, incoming passengers were required to fill the form and were advised to isolate themselves if they experienced symptoms within 28 days after return from COVID-19 affected areas. Four days later, passengers from China, Hong Kong, Republic of Korea, Japan, Italy, Thailand, Singapore, Iran, Malaysia, France, Spain and Germany were advised to undergo self-imposed quarantine for a period of 14 days from the date of their arrival. Progressively though, the guidelines were made more stringent. On 11 March, the MOHFW declared that while passengers from these destinations shall be mandatorily quarantined for a minimum period of 14 days, passengers from other destinations could also be quarantined for the same period. Five days later, the categories of passengers who would undergo mandatory quarantine were further expanded. Eventually, on 18 March, the MOHFW published its standard operating procedure which stated that passengers without risk factors would be strictly under “Home Quarantine,” or face penal sanction, while high-risk passengers would be under “Government supervised quarantine” at a paid hotel or a government facility. 

Neither the MOHFW guidelines nor the self-declaration form mentioned the purpose for the collection of the personal information. However, one could assume that the aim was to alert people if a co-passenger was subsequently diagnosed with COVID-19, or to check on people who might be experiencing symptoms and were asked to be under home quarantine.

The dissemination of this information in the form of “quarantine lists” has now caused a backlash and people have voiced concerns over the breach of trust and their privacy being compromised. Representatives of different authorities have offered justifications for the dissemination, claiming that this was necessary to contain the spread of COVID-19 in the communities, create social pressure and deter people from violating home-quarantine. For instance, Sanjeev Kumar, the divisional commissioner of Nagpur, justified the move and said that, “We just want people to keep an eye in the neighbourhood and inform us if they see these people socializing.” However, the question remains whether the dissemination of the quarantine lists violates the right to privacy.

Before a violation of privacy can be determined, a preliminary aspect to be addressed is whether the right to privacy is interfered with at all. The answer to this depends on a test enunciated by the Supreme Court in two landmark judgments. On 24 August 2017, a nine-judge bench of the apex court ruled that the right to privacy is a fundamental right guaranteed by the Indian Constitution in KS Puttaswamy vs Union of India. Puttaswamy, a retired judge of the Karnataka High Court, had challenged the government over Aadhaar cards and this judgment came to be known as Puttaswamy I. On 26 September 2018, a five-judge bench declared that the Aadhaar Act of 2016 did not violate the right to privacy. This judgment is commonly referred to as Puttaswamy II.

Under this test, the right to privacy is compromised only if a “reasonable expectation of privacy” existed and was breached. The doctrine of reasonable expectation has both a subjective and an objective element. The former is met if the individual subjectively expected the information to be kept private. The latter is met if the individual’s expectation was objectively reasonable.

The publication of quarantine lists violated the reasonable expectation of privacy of the concerned individuals for at least three reasons. These reasons relate to the nature of the information collected, the context in which it was collected, and the seriousness of the privacy claim.

First, the information collected, and later published, was personal in nature. Phone numbers, residential addresses and email addresses are identifiers that provide direct access to an individual, thus providing an easy means of intrusion. In Puttaswamy I, the Supreme Court took note of the power of data in today’s age and held that the right to privacy implied full control over one’s personal information. The Data Protection Bill introduced in parliament last year states that “personal data”—such as phone number, house address and email address—shall not be used for any purpose without the concerned individual’s consent. Justice SK Kaul’s observations in Puttaswamy I capture this idea perfectly: “An individual has the right to control one’s life while submitting personal data for various facilities and services.” In the case of the quarantine lists, the nature of the information collected raises a “reasonable expectation” in the passengers’ minds that the information would be kept confidential.

Second, the context in which the information was collected also points to a reasonable expectation of privacy. The act of filling forms at airports ordinarily implies private communication between the individual and the state, as in the case of customs declarations. But in this case the passengers were not informed that their information could be made public in future. None of the passengers could reasonably foresee that the data would be published anywhere, in any form, much less that it would be uploaded on official websites in a curated form.

Third, the privacy claim at stake here is serious. Publication of sensitive private information is likely to expose the person concerned to stigma, harassment, and even racism. According to several news reports, many individuals have complained of repeated anonymous calls and harassment at the hands of media, landlords, neighbours and residents’ associations, sometimes triggering health problems, after their private data was made public.

These factors clearly establish that a reasonable expectation of privacy existed. Hence, the publication of quarantine lists interfered with the right to privacy. Consequently, it’s imperative to examine if this interference was justified.

Puttaswamy I laid down a three-part test to examine if an “interference” is justified: first, whether the action is sanctioned by law; second, whether the action is aimed at achieving a legitimate aim; and third, whether the action is necessary and proportionate for the achievement of that aim.

The publication of the quarantine lists fails the first prong of the test. Neither the self-declaration form nor the quarantine list disclosed any statutory basis. The entire exercise comprised four different steps: first, the collection of information from incoming passengers by the central government as a prerequisite for immigration; second, the sharing of information by the central government with the state governments; third, the curation of this information into quarantine lists by the state governments; and fourth, the dissemination of this information by the state governments.

In the present case, neither the Disaster Management Act, 2005 nor the Epidemic Diseases Act, 1897 vests the central or the state governments with any express power that provides a basis for any of these four steps. As such, the collection, collation and dissemination of this personal information is not sanctioned by law.

Even if one were to argue that the central government relied on the residuary powers under Section 6 of the Disaster Management Act, while the state governments drew theirs from Section 2(1) of the Epidemic Diseases Act, the respective authorities would still have to demonstrate that these measures were “necessary” for dealing with the disaster and preventing the outbreak or spread of the epidemic.

Next, it is imperative to examine if the publication of the quarantine lists passes the second prong of the test. Justice DY Chandrachud, who spoke for four of the nine judges in Puttaswamy I, held that the court will not “reappreciate or second guess the value judgment of the legislature” except if the value judgment is “manifestly arbitrary.” The Supreme Court has consistently held manifest arbitrariness to imply something done “capriciously, irrationally and/or without adequate determining principle.” Since Puttaswamy I does not give an exhaustive list of aims that would qualify as “legitimate,” the state’s stated purpose has to be analysed for “manifest arbitrariness” in every single case.

The state governments have put forth three different aims to justify the publication of the quarantine lists: deterrence, social pressure, and information and safety.

The argument of deterrence is irrational as it militates against the ultimate aim of preventing the spread of COVID-19. The authorities’ rationale appears to be that people will strictly observe the lockdown to avoid the risk of publication of their personal details online. Publication would occur only if they were home quarantined, which would happen only if they or someone in close contact contracted the disease, to avoid which they must obey the lockdown. But this chain of thought misses the critical link between contracting the disease and being home quarantined, which is that the individual must report the symptoms. If reporting leads to the undesirable outcome of publication of one’s personal details online, would individuals not be deterred from reporting? It is irrational to deter individuals when we desperately need them to come forward and cooperate.

The second aim, social pressure, is even more problematic. The rule of law expects the democratically elected state functionaries to use their own wisdom in making decisions. This expectation is further heightened when the state’s actions impact fundamental rights. Rights are a constitutional commitment, and even though everyone agrees that they are not absolute, the least they demand is sincere care and consideration on the part of the state. Any attempt to restrict these rights must hence be carefully thought out. In this framework, external pressure is the last thing that should guide state action impinging on rights; indeed, acting to please the mob amounts to an abdication of constitutional duty.

The third aim, however, would qualify as legitimate. Chandrachud’s observations in Puttaswamy I indicate that preservation of public health and safety is a legitimate state aim. Even globally, it is recognised as a legitimate ground to restrict privacy rights. For instance, Article 8 of the European Convention on Human Rights states that the right to privacy may be restricted “for the protection of health.” Likewise, Article 29 of the Universal Declaration of Human Rights provides that all rights, including the right to privacy, may be restricted for “meeting the just requirements of… the general welfare in a democratic society,” and also for “securing due recognition and respect for the rights … of others” which would include the right to health recognised in Articles 22 and 25 of the declaration.

Since a legitimate state purpose exists, the publication of quarantine lists would satisfy the second prong of the three-part test. However, the overall validity of the measure will turn on how that valid purpose was pursued.

The third prong of the Puttaswamy test requires the state action to be necessary and proportionate to the legitimate aim being pursued. This raises a few pertinent questions: Did the dissemination of personal information in the form of quarantine lists have any rational nexus with the legitimate goal of securing public health? Was such dissemination required or merely desirable? Did the state adopt the least restrictive measure possible? Did the state strike a fair balance between the public interest at stake and the individual’s right to informational privacy?

In the present case, preservation of public health by reducing the risk of community transmission is a legitimate goal. But what goal would lists published after the imposition of the lockdown serve, when people were already prevented from leaving their homes?

Some observations by Chandrachud, in Puttaswamy I, become relevant at this juncture. Explaining how “anonymity” can be used to protect “privacy”, he observed:

“Privacy involves hiding information whereas anonymity involves hiding what makes it personal. An unauthorised parting of the medical records of an individual which have been furnished to a hospital will amount to an invasion of privacy. On the other hand, the State may assert a legitimate interest in analysing data borne from hospital records to understand and deal with a public health epidemic such as malaria or dengue to obviate a serious impact on the population. If the State preserves the anonymity of the individual it could legitimately assert a valid State interest in the preservation of public health to design appropriate policy interventions on the basis of the data available to it.”

Admittedly, it is important to draw people’s attention to the potential risk of community transmission in their locality. It is equally essential for people under home quarantine to follow the guidelines and not step out. However, all of this could have been achieved in an anonymised manner, without the disclosure of any personal details. The authorities could have provided data on the number of people infected with COVID-19 or under home quarantine in a locality, and the areas that they might have visited, to ensure that others understand the gravity of the situation and regulate their conduct. They could consider putting stickers in the vicinity of houses of people who are quarantined, without disclosing their names or exposing their personal information on a platter. Depending on the locality, these stickers could also be put around common spaces or outside lanes providing information about the number of people quarantined in the particular lane. Instead of expecting neighbours to report each other, and create hostility, ostracisation and stigma, under the garb of “cooperation from community members,” authorities could explore other less restrictive alternatives. The local administrations could organise inspection visits, surprise checks, calls on landlines and video calls, to ensure compliance.
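The anonymised alternative described above can be illustrated with a short sketch. In the minimal example below, all records, names and field labels are invented for illustration; the point is only that individual quarantine records can be reduced to per-locality counts, with every directly identifying field discarded before anything is published:

```python
from collections import Counter

def anonymised_summary(records):
    """Reduce individual quarantine records to per-locality counts,
    discarding all directly identifying fields (name, phone, etc.)."""
    return dict(Counter(r["locality"] for r in records))

# Hypothetical records of the kind collected through airport forms.
records = [
    {"name": "A", "phone": "98xxxxxx01", "locality": "Dadar"},
    {"name": "B", "phone": "98xxxxxx02", "locality": "Dadar"},
    {"name": "C", "phone": "98xxxxxx03", "locality": "Andheri"},
]

print(anonymised_summary(records))  # {'Dadar': 2, 'Andheri': 1}
```

The public would learn that two people in Dadar and one in Andheri are under quarantine, which is enough for neighbours to regulate their own conduct, while no name, phone number or address ever leaves the authority's hands.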

In fact, some state governments have launched mobile applications that provide graphic representations of the number of people who are quarantined around you, route maps and contact tracing. Although such applications themselves have privacy implications for their reliance on surveillance, and features that allow one to report people who are “violating” the quarantine, the anonymised graphic representations appear to be less restrictive than publication of quarantine lists.

Thus, although there is an onus on the state to find the least restrictive measure to achieve the legitimate goal, it has miserably failed to do so in the case of the quarantine lists which unjustifiably infringe the fundamental right to privacy. While the COVID-19 pandemic has led to an extraordinary situation, it is in times like these that the state’s commitment to protection of rights is put to test most rigorously. While combating COVID-19, the state must go the extra mile to ensure that the right to privacy is not quarantined in the process.

Sanya Kumar is an advocate in Delhi and a graduate of the National Law University Delhi and Yale Law School

Shrutanjaya Bhardwaj is an advocate in Delhi and a Fellow at the Centre for Communication Governance at National Law University Delhi. He is a graduate of the National Law University Delhi and University of Michigan Law School

Caught on Camera: India is Woefully Unprepared for Facial Recognition Technology

Varsha Rao

The current discourse surrounding the use of facial recognition technology in surveillance operations, prompted by the recent ban in San Francisco, is populated by cost-benefit analyses – the cost of privacy and freedom of assembly versus the benefit of nabbing criminals and identifying missing children.

The problem with such an approach is that it pits individual rights and freedoms against the State’s duties without allowing for space to address shortcomings. It creates a shallow profile of the person opposing or favouring the technology. If you value your privacy and oppose real-time surveillance, you must be soft on crime. If you are willing to implement large-scale surveillance to track the thousands of children that go missing every year, you must be indifferent to the prejudices faced by minority communities.

The country is at a point in time where widespread use of facial recognition technology by law enforcement officials (such as Punjab Artificial Intelligence System and Chennai’s FaceTagr) and private companies (Paytm) is taking place without any legislation to regulate its implementation. On one hand, that is an alarming reality to face. On the other hand, since nothing has been set in stone yet, there is a sizeable opportunity to develop regulations that will wholeheartedly attempt to address the spectrum of citizen concerns.

The Obvious Red Flags in India’s Social Fabric

According to a deputy police commissioner in Chennai, to avoid misuse of their facial recognition technology, police personnel have been instructed to refrain from using the application unless they find a person suspicious. For a country steeped in caste-based and communal prejudices, we cannot brush aside the extent to which the concept of a “suspicious person” can be corrupted at the individual policeman’s level. There is ample evidence from the United States to warn us of biases that manifest in the form of tragic police killings.

India continues to live under the shadows of the Criminal Tribes Act of 1871 – the predecessor of the Habitual Offenders Act, 1952 – which links unfounded allegations of hereditary criminality to certain marginalized communities. Furthermore, a socioeconomic study of prisoners on death row in India (published in 2016) yielded distressing insights into the criminal justice system: around 35% of death row inmates belong to the OBC community and 25% to the SC/ST community. Religious minorities were found to comprise 21% of death row prisoners. This is not to say that purely direct discrimination is at play on the part of the police force and criminal justice system, but it is enough to hint at subconscious prejudices and microaggressions permeating India’s justice delivery mechanisms.

Lessons from the DNA Technology Bill

Legal expert Usha Ramanathan had pointed out in her dissent note on The DNA Technology (Use and Application) Regulation Bill, 2018 that the proforma in use by Government agencies, such as the Centre for DNA Fingerprinting and Diagnostics (CDFD), inquires about the caste of the person whose DNA is being collected. Instances such as this play right into our suspicions about prejudiced profiling.

Important observations can also be drawn from the test carried out by the American Civil Liberties Union (ACLU) of Amazon’s facial recognition tool “Rekognition” and its aftermath. The test revealed that the software had incorrectly singled out 28 members of the U.S. Congress as people who had been arrested for a crime. The false matches disproportionately comprised people of colour – nearly 40% of Rekognition’s false matches.

In response, Amazon highlighted that the confidence threshold of 80% used by ACLU is appropriate for general use cases (such as identifying celebrities on social media) but not for public safety use, thereby leading to false positives. The recommended confidence threshold of 99% resulted in a misidentification rate of zero in tests conducted by Amazon. What is interesting to note is that in 2017, Amazon’s blog post demonstrated the use of Rekognition to identify persons of interest for law enforcement using a confidence threshold of 85% (indicated by the variable ‘faceMatchThreshold’ in the code). After the findings of the ACLU study, Amazon went from recommending 95% or higher as the confidence threshold to 99%.
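The dispute over thresholds turns on simple filtering logic. The sketch below is hypothetical: the names and confidence scores are invented, not drawn from Rekognition's actual output or API. It shows how a cut-off of 80% admits borderline candidates that a 99% cut-off would reject:

```python
def matches_above(scores, threshold):
    """Return the candidate matches whose confidence meets the threshold."""
    return [name for name, conf in scores if conf >= threshold]

# Hypothetical (candidate, confidence) pairs from a face-matching system.
scores = [
    ("suspect_db_17", 0.99),      # genuine high-confidence match
    ("congress_member_a", 0.84),  # borderline, likely false positive
    ("congress_member_b", 0.81),  # borderline, likely false positive
    ("unrelated_person", 0.62),
]

print(matches_above(scores, 0.80))  # admits the two borderline candidates
print(matches_above(scores, 0.99))  # only the high-confidence match survives
```

Under these invented numbers, the 80% cut-off produces three "matches", two of them spurious, while the 99% cut-off keeps only one; which threshold an operator actually uses is therefore a policy decision with civil-liberties consequences, not a technical detail.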

When the creators of the software themselves cannot keep their numbers straight, how much confidence can we realistically repose in our law enforcement officials?

The false positives generated by DNA evidence can shed some light on the potential consequences of using new technology such as facial recognition. The science of DNA profiling is probabilistic and statistical – first, the likelihood that the collected DNA belongs to the suspect is analysed, followed by the likelihood that the DNA sample could belong to someone else in a given population. Unfortunately, when DNA profiling is sold as the be-all and end-all of nabbing criminals, statistical nuances are left out of the argument. Unless police personnel and judges are trained to fully comprehend the evidence placed before them, incorporating science into the criminal justice system loses its value and any attempt at doing away with wrongful convictions is neutralized.

The Way Forward

The point of highlighting the potential and actual misuse of facial recognition technology is not to demonize tech companies, hold them accountable for the failures of the State or to ignore the benefits of such technology in cases of missing children and human trafficking. The point, instead, is to create room for improvement and minimize the infringements of emerging technology on fundamental human rights. Developers should not be heralding the benefits while wilfully ignoring feedback and minimizing the appearance of drawbacks.  

If they continue to do so, then they are truly missing the point.

Instead, tech companies need to insist on the establishment of a regulatory framework to govern the use of facial recognition technology on public and commercial premises. If not for the sake of human rights and freedoms, then at least to avoid outright bans on the technology and ensure stability in future policies.

Since India is home to the world’s largest biometric database – the much-coveted Aadhaar – the country is in a unique position to assume a leadership role when it comes to regulating facial recognition technology. The Government cannot pass the buck of responsibility to the tech companies as it has attempted to do with the problem of ‘fake news’.

Imposing standards of oversight, limiting function creep, exploring issues of privacy and consent, protecting databases from cybersecurity breaches and the redressal of complaints – all fall within the ambit of Government control. Stakeholder consultations with tech companies and civil society will ensure a richness of debate and hopefully, incorporate the voices of the marginalized as they have the most to lose in a surveillance-heavy environment.

The path that India chooses to follow in relation to facial recognition technology must firmly be in the opposite direction of the Chinese government, which has been allegedly deploying such technology to keep tabs on the oppressed Muslim Uighur population. Hopefully, the next Government in power has the political will to pick up the slack in harmonizing effective protections for rights and freedoms with the benefits of emerging technology. But for now, if you happen to spot a camera trained on you in a marketplace such as Chennai’s T. Nagar, don’t forget to smile and wave.

(Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.)

Will Fake News Decide the World’s Largest Election?

Varsha Rao


Social media platforms like Facebook and Twitter have been hauled up by many an authority for their role as conduits in the dissemination of ‘fake news’, including the United States Senate and India’s Parliamentary Standing Committee on Information Technology.

In the run-up to the Lok Sabha Elections, social media and messaging platforms have put in place strategies to check attempts at spreading misinformation.

To prevent voter suppression (acts aimed at reducing voter turnout), Facebook bans misrepresentations about the voting process, such as claims that people can vote online. The launch of political advertisement archives in India by Facebook, Google and Twitter has also contributed to the fight against misinformation campaigns.

Fact-checkers are the second line of defence after artificial intelligence and machine learning have had their turn in identifying potential pieces of false news. The judgement and research ability of the fact-checker determines the rating that the highlighted content will receive, as well as its priority on platforms, such as on Facebook’s News Feed.

However, entities involved in misinformation campaigns remain undeterred.



Cutting Off a Hydra’s Head: Challenges Befalling Social Media Intermediaries

The existence of multiple forms of social media and competing platforms allows malicious actors to engage in ‘platform shopping’ and utilize methods which throw up fewer obstacles.

An example of this is podcasts or series of audio files. Scrutinizing the content of podcasts for hate speech and misinformation is much more difficult than identifying buzzwords in articles. When platforms disseminating podcasts do not have transparent policies on taking down content, there is no guarantee that flagging a podcast for problematic content will contain its reach. Furthermore, since podcasts are in a nascent stage of popularity, platforms may not have the resources or funding to engage in extensive fact-checking or hire third party fact-checkers.

The emergence of ‘deepfakes’, or artfully doctored photos and videos, has also contributed to the flow of misinformation on social media. Lack of awareness regarding the existence and popularity of ‘deepfakes’, along with the difficulty of spotting manipulations in the footage, exacerbates their ability to influence the target audience.

Social media companies are well aware that they are going up against determined actors with the capacity to generate creative solutions on the fly. One such example was observed by an American reporter on a chat channel, Discord. When a Twitter user complained about Twitter’s proactive measures in deleting accounts connected to voter suppression attempts, another user suggested the use of Snapchat filters on photos found online when creating a fake account to evade reverse image searches.

It does not help that certain challenges faced by social media companies have no immediate solutions. In its press releases, Facebook has highlighted the scarcity of professional fact-checkers worldwide, the time it takes for a complex news item to be scrutinized and the lack of meaningful data in local dialects to aid machine learning.

Furthermore, while solutions have been implemented by social media companies in good faith, they have been shown to remain unsuccessful in tackling the problem as a whole. A reporter for The Atlantic drew attention to a loophole in Facebook’s Ad Library authentication process, an otherwise effective dragnet in a sea of insidious advertising: by setting up a limited liability company to act as the publisher of an ad, special interest groups can obscure their identity and continue to sponsor ads on Facebook. The inability to predict users’ behavioural tendencies may also undermine a solution, as in the case of WhatsApp, where labelling forwarded messages may not prompt recipients to question a message’s legitimacy if they have faith in the credibility of the sender.

While scrutinizing the strategies offered by social media platforms and other intermediaries, it is important to keep in mind that the problem of ‘fake news’ is not a new phenomenon. The introduction of the printing press in the 15th century also unleashed a wave of ‘fake news’ regarding witches and religious fanaticism which would be printed alongside scientific discoveries. Thus, while social media may have amplified its reach – much like a microphone does in the hands of a speaker – it is ultimately the individual spewing vitriol that is the true culprit. The burden of generating solutions cannot be solely borne by the intermediaries.


Is the Government’s Heart in the Right Place? Misplaced Solutions for an Insidious Problem

Unfortunately, as part of its contribution to curbing the dissemination of ‘fake news’, the Government has made scarce headway. In April 2018, a directive was issued by the Ministry of Information and Broadcasting stating that the accreditation of journalists found to have generated or circulated ‘fake news’ would be suspended for a time period determined according to the frequency of violations, and would be cancelled in case of a third violation. The guidelines were immediately withdrawn on the direction of the Prime Minister’s Office after the Government was heavily criticized by journalists and media bodies for attempting to muzzle the free press.

In December 2018, the Ministry of Electronics and Information Technology published the draft Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018, which require – inter alia – that intermediaries enable tracing of the origin of information and deploy automated tools to proactively identify and remove unlawful content. WhatsApp, a platform with end-to-end encryption, took a stand against breaking encryption and pointed to privacy and free speech concerns to justify its position.

As countries attempt to block the dissemination of ‘fake news’ on the internet and regulate the flow of information on social media platforms, it is imperative to ensure that overbroad definitions and strategies do not end up promoting political censorship.

China’s crackdown on ‘online rumours’ since 2013 is an example of the State controlling information flow. Not only must ‘rumours’ – including content undermining morality and the socialist system – be removed by social network operators, but their publication can also result in a jail term of three years for the creator. The licenses required by social media networks to operate in China may be held hostage if their interpretation of ‘rumours’ does not align with that of the Chinese authorities. This incentivizes overly cautious intermediaries to block or report content that seems ‘fake’ by the Government’s standards, thus leading to collateral censorship.


‘What Is’ versus ‘What Could Have Been’: The Pitfalls of Election Campaigning  

The lack of significant engagement and progress on the ‘fake news’ and misinformation front is certainly a cause for concern as it points to a lack of political will.

The misuse of social media and messaging platforms by the ruling party as well as the Opposition has been widely reported by news outlets. The BJP President allegedly told the party’s IT cell volunteers that the 32 lakh-strong WhatsApp groups allow the BJP to deliver any message to the public, even if it is fake. Last month, Facebook took down pages connected to the Congress IT cell, as well as an IT firm behind the NaMo app, for coordinated inauthentic behaviour and spam. WhatsApp’s head of communications has also interacted with political parties to highlight that WhatsApp is not a broadcast platform and that accounts engaging in bulk messaging will be banned.

For political parties, there is much to gain by manipulating public opinion in a country where elections are tightly contested along narrow margins, and election results have a long-lasting impact on the intricate fabric of national identity. Back in 2013, the Internet and Mobile Association of India (IAMAI) had gathered from a social media survey conducted in 35 Indian cities that the votes of only 3-4% of social media users could be swung. Of course, this was before the 2016 U.S. Presidential Elections, which saw social media disinformation campaigns being executed with a renewed vigour.

As a starting point, political parties could have agreed to refrain from executing misinformation campaigns and instead, opted to encourage healthy debate based on verifiable facts to influence the electorate. Mud-slinging and propaganda campaigns are tactics that could potentially win elections. However, political candidates cannot ignore the lethal consequences of ‘fake news’ in India and carry on as if it is business as usual, especially when ‘fake news’ has become a life-and-death issue.

In the run-up to its federal elections in 2017, major political parties in Germany entered into a ‘gentleman’s agreement’ to disregard information leaked as a result of cyberattacks instead of exploiting it. An agreement by Indian political parties on the ethics that ought to govern social media use would have underscored the same spirit.

Instead of attempting to increase the burden on intermediaries, the Government could also have undertaken extensive digital literacy campaigns to build resilience against attempts at manipulation, be it domestic or foreign. The campaigns could have been structured to highlight the techniques by which false information is propagated to manipulate the psychology of the voter.

Social media platforms, political parties and the Election Commission form a trinity that shares the responsibility of protecting the authenticity of content informing a voter’s choice. While the degree of responsibility may be different, without collaboration, the goal will remain unachievable. The shortcomings of the political parties do not absolve the social media intermediaries of their responsibility. It took Twitter until half of polling had been completed to launch an anti-voter suppression feature on the microblogging platform. There have also been multiple instances of ‘fake news’ being taken down on other social media platforms but remaining in circulation on Twitter.

The impact of misinformation campaigns on the Lok Sabha elections will be uncovered only once the elections come to an end. The best-case scenario is that it has a negligible impact on the election result. The worst-case scenario? The influence is so pervasive that we will follow in the footsteps of the U.S. and take a minimum of two years to uncover its reach.

Regardless of what ultimately happens, perhaps there is one thing we can all agree on – not enough has been done to protect this “festival of democracy” from being manipulated.


(Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.)

Securing Electoral Infrastructure: How Alert is India’s Election Chowkidaar?

Varsha Rao

With the publication of Special Counsel Robert Mueller’s much-awaited report on Russian interference in the United States Presidential Elections of 2016, the threat of hacking and misinformation campaigns to influence elections is taking centre-stage yet again. Closer to home, the discussion has become more pertinent than ever before. In a democratic process of gigantic proportions, 900 million Indians across 543 constituencies are expected to cast their vote in 7 phases to elect a Government for the next five years.

The gravity and significance of the ongoing General Elections to the Lok Sabha thus raise the question – how susceptible is the world's largest democracy to cyber interference?

Interfering in an election in the digital age involves a two-pronged attack – firstly, by influencing the political inclination of the electorate via misinformation campaigns on social media platforms, and secondly, by manipulating the electoral infrastructure itself. This article will focus on the latter, more specifically, the infrastructure and processes administered by the Election Commission of India.

Voter Registration Databases and Election Management Systems (EMS)

Unfettered access to voter registration databases arms malicious actors with the ability to alter or delete the information of registered voters, thereby impacting who casts a vote on polling day. Voter information can be deleted from the electoral rolls to accomplish en masse voter suppression and disenfranchisement along communal lines in an already polarized voting environment. The connectivity of voter databases to various networks for real-time inputs and updates makes them highly susceptible to cyberattacks.

The manipulation of election management systems (EMS) can have an even wider impact on the electoral process. Gaining access to the Election Commission’s network would be akin to creating a peephole into highly confidential data ranging from deployment of security forces to the tracking of voting machines.

Election Commission staff can be targeted via phishing attacks in a manner similar to the cyberattacks executed during the 2016 U.S. elections. Classified documents of the U.S. National Security Agency (NSA) as well as Special Counsel Robert Mueller's report confirm that hackers affiliated with the Russian government targeted an American software vendor tasked with maintaining and verifying voter rolls. Thereafter, posing as the vendor, the hackers successfully tricked government officials into downloading malicious software that created a backdoor into the infected computers.

The Election Commission has made proactive attempts to improve the cyber hygiene of its officials by conducting national and regional cybersecurity workshops and issuing instructions regarding vigilance against phishing attacks. Furthermore, Cyber Security Regulations have been issued to regulate the officers' online behaviour. A Chief Information Security Officer (CISO) was appointed in December 2017 at the central level, and Cybersecurity Nodal Officers have been appointed at the State level.

The Election Commission has also addressed spoofing attempts by taking down imposter apps from mobile phone app distribution platforms. According to newspaper reports, the Election Commission has carried out a third-party security audit of all poll-related applications and websites, and enabled Secure Sockets Layer (SSL) on the Election Commission website to encrypt information exchanged between a user’s browser and the website.

There is no doubt that cybersecurity risks are constantly evolving, and it remains imperative for the Election Commission to conduct systematic and periodic vulnerability analyses in collaboration with security auditors to update Election Commission systems and software.

Electronic Voting Machines

An EVM is made up of two units – a Control Unit and a Balloting Unit, linked by a five-metre long cable. The Presiding/Polling Officer uses the Control Unit to release a ballot. This allows the voter inside the voting compartment to cast their vote on the Balloting Unit by pressing the button labelled with the candidate name and party symbol of their choice. An individual cannot vote multiple times as the machine is locked once a vote is recorded, and can be enabled again only when the Presiding Officer releases the ballot by pressing the relevant button on the Control Unit.
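The one-vote-per-release interlock described above can be sketched as a toy state machine. This is purely illustrative – it is not the actual EVM firmware, and the class and method names are invented for the example:

```python
# Toy model of the Control Unit / Balloting Unit interlock:
# one vote is recorded per ballot release, and the machine
# locks itself until the Presiding Officer releases the next ballot.

class ToyEVM:
    def __init__(self):
        self.ballot_enabled = False   # machine starts locked
        self.votes = {}               # candidate -> vote count

    def release_ballot(self):
        """Pressed by the Presiding Officer on the Control Unit."""
        self.ballot_enabled = True

    def cast_vote(self, candidate):
        """Pressed by the voter on the Balloting Unit."""
        if not self.ballot_enabled:
            return False              # machine is locked; press ignored
        self.votes[candidate] = self.votes.get(candidate, 0) + 1
        self.ballot_enabled = False   # lock until the next release
        return True

evm = ToyEVM()
evm.release_ballot()
assert evm.cast_vote("A") is True    # first press records the vote
assert evm.cast_vote("A") is False   # repeat presses are ignored
```

The essential property is that the voter-facing unit can never record two votes for one release: the lock is cleared only by the officer's button on the Control Unit.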

While the Election Commission has reiterated time and again that EVMs are tamper-proof, the machines have come under criticism from security researchers and computer scientists. To defend the integrity of EVMs, the Election Commission frequently cites the simplicity of the machine's design. The EVMs are battery-operated in order to be functional in parts of the country that do not have electricity access. Additionally, they are not connected to any online networks, nor do they contain wireless technology, thereby mitigating the possibility of remote software-based attacks. While these factors certainly reduce the potential for EVM hacking, they do not justify the Election Commission's unshakeable belief that EVMs are infallible.

The most explosive demonstration of EVMs' susceptibility to hacking was carried out all the way back in 2010 by a Hyderabad-based technologist, Hari K. Prasad, in collaboration with J. Alex Halderman, an American computer science professor, and Rop Gonggrijp, a hacker who campaigned to decertify EVMs in the Netherlands.

Various personnel interact with the EVM, right from the beginning of the supply chain to the officials and staff responsible for its storage and security before and after polling. In a paper published by Hari Prasad and his team, two methods of physical tampering were tested and demonstrated. The first method is to replace the Control Unit’s display board which is used during the counting process to show the number of votes received by candidates. The dishonest display board, on receiving instructions via Bluetooth, would have the ability to intercept the vote totals and display fraudulent totals by adjusting the percentage of votes received by each candidate. The second method involves attaching a temporary clip-on device to the memory chip inside the EVM to execute a vote-stealing program in favour of a selected candidate.

The physical security of the EVM takes on manifold importance in light of the above. The Election Commission has strict procedures in place to transport and store the machines, employing GPS and surveillance technology. Storage spaces known as 'strong rooms', with a single entry point, a double-lock system and CCTV coverage, are used. However, there have been frequent news reports of EVM theft, strong-room blackouts and unauthorized access.

The Election Commission has argued that since mock polls are conducted before official polling commences, any malfunctions or tampering attempts will be detected before they can affect the electoral process. However, this countermeasure does not address the possibility of attackers programming their tampering devices to kick into gear only after the EVM has recorded a set number of votes, thereby skipping over any mock-poll entries.

Furthermore, while the source-coding – the writing of the software onto the EVM chip – is done by Indian public sector undertakings (PSUs), the microchips themselves are imported from the United States and Japan. Since the EVM chip is a one-time programmable, 'masked' chip, it can neither be read, copied nor overwritten. The benefit of this feature is that the chips cannot be re-programmed by malicious actors. However, the masking also has a downside – if any vulnerabilities are inserted into the chip or the source code as the machine components move along the supply chain, it may not be possible to detect them.

Introducing a Voter Verifiable Paper Audit Trail (VVPAT) system was widely touted as a second layer of verification to catch any EVM malfunctions. It was only at the insistence of the Supreme Court that the Election Commission agreed to roll out EVMs with VVPATs for the ongoing General Elections.

When a vote is cast, the battery-operated VVPAT system prints a slip containing the serial number, name and symbol of the candidate, which is available for viewing through a transparent window for a few seconds. Following that, the slip falls into a sealed drop box.

An effective VVPAT audit is an important answer to the vulnerabilities plaguing EVMs. The Election Commission's procedure for the VVPAT audit involved counting the slips from one polling booth per Assembly segment for the General Elections. The Supreme Court had to intervene again – at the insistence of Opposition parties – for the Election Commission to increase the audit from one EVM to five per Assembly segment. The Court did not accept the Opposition parties' plea to have 33–50% of the votes verified.

The call for extensive VVPAT slip audits has been an ongoing battle, with bureaucrats, politicians and experts on the frontlines. Former bureaucrats have written to the Election Commission to increase the audit sample size to 50 machines per 1 lakh booths instead of 5-6 machines. A former Chief Election Commissioner has proposed that the two runners-up in a constituency be given the option to randomly select two EVMs each for a VVPAT slip audit – a procedure similar to the Umpire Decision Review System in cricket. Another proposed method, known as the Risk Limiting Audit, requires ballots to be audited until a pre-determined statistical threshold of confidence is met.
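The arithmetic behind these sample-size debates is straightforward: if tampered machines are rare, a small uniform random sample is very unlikely to contain even one of them. A short sketch makes the point – all numbers here are hypothetical, chosen only for illustration:

```python
from math import comb

def detection_probability(total, tampered, sampled):
    """Chance that a uniform random sample of `sampled` machines
    contains at least one of `tampered` bad machines out of `total`.
    This is 1 minus the hypergeometric probability of an all-clean sample."""
    clean = total - tampered
    if sampled > clean:
        return 1.0  # sample is larger than the clean pool; detection certain
    return 1 - comb(clean, sampled) / comb(total, sampled)

# Hypothetical scenario: 4,000 machines in a region, 1% (40) tampered.
for k in (5, 50, 500):
    p = detection_probability(4000, 40, k)
    print(f"audit {k:>3} machines -> detection probability {p:.3f}")
```

With these illustrative numbers, auditing five machines catches tampering only about one time in twenty, which is why risk-limiting approaches instead keep sampling until a chosen confidence threshold is reached.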

The resistance displayed by the Election Commission to introducing VVPAT slip audits as well as expanding the sample size of the audits is alarming. The Chief Justice of India even reprimanded the Election Commission for “insulat[ing] itself from suggestion for improvement”. Unsurprisingly, the Court had to reassure the Election Commission that in making recommendations to improve the electoral process, it was not casting aspersions on the functioning of the body.

While it is commendable that the Election Commission has embraced the implementation of technology like EVMs in the electoral process, it is becoming clear that it has not incorporated the tradition of vulnerability research and software patching to prevent further exploits. Security researchers must be provided time and unfettered access to test the efficacy and security offered by EVMs. Hacking challenges should not be restricted to EVM replicas or superficial tinkering on the external body of the EVM.

It is understandable for an authority like the Election Commission to focus on protecting the integrity of the institution as well as of the election infrastructure. However, pointing out flaws in EVM technology is not equivalent to an attack on the institution of the Election Commission. The entire process of elections is built around trust – be it trust in the method of casting votes or trust in the authority tabulating them. It is therefore the responsibility of those in whom the trust of the electorate is reposed to ensure transparency at every stage and to welcome public scrutiny, especially when new and complex technology is being employed.

(Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.)

WhatsApp’s message limit isn’t enough to halt the spread of fake news

A limit on forwarding messages has been extended from India to the rest of the world, but more needs to be done by all parties

This post first appeared on the NewScientist on February 7, 2019

WhatsApp took out adverts in India in a bid to counter fake news (Photo: Prakash Singh/AFP/Getty Images)

In the US and Europe, Facebook stands accused of facilitating the spread of propaganda and fake news. In India, Facebook’s subsidiary WhatsApp is under the same pressure, charged with the spread of misinformation from political parties, and more dangerous material: last year, at least 35 people in India were killed by violent mobs incited by rumours of child abduction spread through WhatsApp.

How do we fight back? Following an outcry, and under government pressure in its biggest market, WhatsApp limited users in India to forwarding a single message just five times. Now the company has rolled out this limit globally.

The WhatsApp message limit came about after the Indian Information Technology minister, Ravi Shankar Prasad, threatened WhatsApp and other social media platforms with charges of abetting violence if they didn't take adequate and prompt action against the spread of misinformation.

Around 10 million people are connecting to the internet in India every month and, for many, it is their first interaction with people outside their immediate community and, more significantly, with mass media. With it, they encountered stories of child snatchers prowling the area, a common rumour. These were forwarded on to others because people believed them and were scared. The app made it possible for the messages to spread far and wide in a very short time. As a result, angry mobs killed dozens of innocent men and women.

WhatsApp forwards

Whether limiting the number of contacts a message can be forwarded to is useful or not in WhatsApp’s fight against fake news is still up for debate. In the six months since this feature was rolled out in India, there are mixed reports of its success. The number of WhatsApp forwards has declined in India since the change was introduced, and representatives of various political parties have admitted that the limit on forwarding has affected their reach.

The effectiveness of WhatsApp’s change depends on the kind of misinformation we are aiming for it to curb. In terms of clickbaity stories that demand to be shared – even when their provenance is doubtful – the change has made the forwarding process more cumbersome and time-consuming, limiting their spread. That holds some promise in a world beset by viral “fake news”.

But concerted disinformation campaigns are likely to live on. Various political parties in India are already finding workarounds, including moving to other popular platforms and adding more people to their teams to maintain the scale.

A message forwarding limit will undoubtedly need to be coupled with other efforts. In meetings with senior WhatsApp officials, the Indian government asked the company to add a mechanism that could reveal the identity and location of a message’s author. Admirably, the company defended its users, and didn’t loosen the end-to-end encryption that protects user privacy and security.

There is a need for more public information from government agencies and other sources to counter scare stories on WhatsApp and other social media platforms. Still, by limiting the speed at which these rumours can spread, the truth may be able to catch up in time to save lives.

A Landscape of Cyber Norms

By Geetha Hariharan

Less than a year ago, the United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (the ‘UN GGE’, for short) famously came to a deadlock in its determination of how international law applies to cyberspace. Comprising 25 states, the GGE was formed to debate the norms applicable to cyber activities in the international sphere – both the law as it stands today, as well as to recommend confidence-building measures amongst states. In 2013, the GGE’s historic pronouncement that international law applies to cyberspace changed the terms of the debate, opening up the question of how the law applies, and in what context.

Let us turn to 2017 and the fifth GGE. The issue concerned cyber warfare. A coalition of states, including the United States, wished the GGE to declare that the international law of war (jus ad bellum and jus in bello) applies to cyber warfare, and that both the inherent right of self-defence and the right to use countermeasures are applicable as well. However, certain other states, such as Cuba (and presumably Russia and China), felt that such a declaration might lead to the “militarization of cyberspace”, and demurred. Thus, the GGE, which operates on principles of consensus, came to a deadlock and, for the first time since its inception, dissolved without a report to show for its extensive deliberations.

This leaves cyberspace with massive lacunae in how international law operates. It is unclear how the norm-building discussions will go forward – and more importantly, where these discussions will be housed. Several suggestions have been raised, including an open-ended working group within the General Assembly, the constitution of a new GGE, and coalitions of similar-thinking states. While the way forward is far from clear, history has left us some examples to look to. But before we enter the how, we will explore the why of international norm-building.

International law is, after all, not a beast that affects our lives – or the lives of our states – on a daily basis, surely? We may wonder, loudly and angrily, about the use and effectiveness of international law in governing interstate relations. When, we may ask, has international law ever stopped a war, or recognized an international wrong, or been effective in stopping a state from doing wrong? The answers to these questions lie in the delicate space of international norm-building and the ways in which states relate to each other.

States are autonomous creatures. Their very existence stems from their ability to be sovereign, to enter into independent, self-initiated relationships with other states, organisations and individuals. For instance, signing on to a bilateral or multilateral treaty is within a state's prerogative and choice. As noted by the Permanent Court of International Justice in its famous 1927 S.S. Lotus decision, states are largely free to do as they please, with the exception of some rules so universal that states cannot signal their disagreement with them, usually termed jus cogens.

This, then, is the task of international law – to place boundaries upon the hubris of states to act as they please. It may do this in several ways; the Statute of the International Court of Justice recognizes four sources of international law. Limits may be placed upon state autonomy in the form of treaties, wherein states signal their express consent to the norms laid down in the treaty. The Hague Conventions, which place limits on state action in times of war, or human rights instruments, which place duties and responsibilities on states vis-à-vis individuals and organisations, are examples. Of course, states may place reservations on their obligations under treaties, but because such reservations are made expressly, they leave a clear record of the state of the law.

States may also accept limits on their autonomy in the form of customary international law. International custom comprises state acts that, consistently performed over an uninterrupted period of time, coalesce into legal norms. Custom must be accompanied by the belief that the rule is binding on states (called opinio juris). Take, for instance, the three-mile rule in the laws of the sea, where states exert their authority over three nautical miles outward into the sea. It reflects an international custom, as the practice of it is accompanied by opinio juris.

Well, now we know why an ‘international law of cyberspace’ is necessary: it is so we know what states can and cannot do to each other and to their citizens. The how of international law, however, is more complex. By now, it is well-documented that a cyberspace treaty is an imaginary beast. As international law stands today, there is far too little agreement to leave space for a cyberspace treaty. You could argue, of course, that it is too early for custom to develop, and you would be right. It took over two decades of state action, followed by the International Law Commission's surprising involvement, for the Law of the Sea to develop, and it is still uncertain that the treaty in its entirety represents customary international law.

So how do we populate this open field? How should the international law of cyberspace develop? Of course, there are multiple ways, and states will no doubt offer their own suggestions. I offer two suggestions myself. The first is to get the International Law Commission involved. The ILC has decades of experience codifying international law (both primary and secondary). While a majority of its experience and success has been in the codification of secondary international law (‘rules about rules’, as Hart says), the ILC has also been instrumental in codifying the Law of the Sea, and in bringing to some semblance of coherence the rules on transboundary harm, and diplomatic and consular relations. Of course, primary rule codification by the ILC would most likely need to be confirmed by states in the form of a treaty or a convention, but we need not let that implausible eventuality stop us from our optimism over codification.

Not only this, but the ILC has already codified secondary rules of responsibility and attribution, which are no doubt crucial in cyber-related incidents. While the Tallinn Manual has done a tremendous job of transplanting the rules of attribution (among other primary rules) to cyberspace, we still need rules that are accepted expressly by states.

The second, and perhaps more plausible, suggestion is a form of active “I Spy”. Through their statements following major cyber incidents, states have already begun to give us a sense of where they consider the boundaries of international law to lie. It is clear from these statements that the Russian operations against Estonia and Georgia constituted interference with state sovereignty, while states have yet to expressly term the Stuxnet incident a use of force or an intervention. It is becoming clear that state influence on the elections of another state using cyber means may constitute intervention, while the implications of that (countermeasures? the threshold for the use of force?) are not yet clear. In sum, the crux of my second suggestion is this: give it time, and keep an eye on state practice. This may be a space where publicists may genuinely make a difference, especially those with some influence on state apparatus.

Of course, a combination of methods to speed up norm-building will probably serve us best. Unlike nuclear power, we are as yet uncertain of the extent of cyber's influence and impact. We are learning every day: the Internet of Things has taught us that we now create a discrete home surveillance network with our gadgets, while Cambridge Analytica has shown that information about us is being used to manipulate our electoral choices. Stuxnet revealed the dangerous extent to which cyber can affect national security, while Estonia showed us that cyber operations are enough to stall a country in its tracks. Much like international law itself, we are still drawing the boundaries of cyber harms. And that is why any single norm-building method will not suffice: it will simply be too slow to keep pace with developments in cyber. And so, let us employ a multitude of methods. Within a year, the landscape of international law norms in cyber will look very different, and if we are observant, we can stay ahead of the developments, even as technology leads the way.

Geetha Hariharan is a Programme Officer at the Centre for Communication Governance at National Law University Delhi

A Liberating Law

The Right to Privacy has given us a forceful line of argument against stifling laws, but it needs a strong civic and political culture to work

By Ujwala Uppaluri

This post first appeared in India Today on August 11, 2018

Close on the heels of Independence Day last year, the Supreme Court told us that the Indian Constitution had always guaranteed our right to privacy. While the nine judges who made up the historic right to privacy court were unanimous, Justice S.A. Bobde proposed the only definition of privacy the court ventured in that case: it is the right to choose and to specify, backed by cognitive freedom, or the assurance of a zone of internal freedom in which to think.

Freedom needs privacy, said the court. It needs the quiet and the shadows. It is only when all Indians can fearlessly choose how they live and who they love that the hopes for freedom and human flourishing with which we began our journey as a democracy in 1947 can be realised. Without the capacity to think, read, write and play on our own and as we like, the freedoms — to express ourselves, to associate, to espouse or reject a religion or even to vote — that we take for granted in our democracy mean very little.

But independent India is not only a democracy; it is also a republic — a free state in which the people are paramount. In the concurring opinions of two judges in the privacy court, there are traces of a view that takes privacy to be intimately connected with our status as a republic.

Justice J. Chelameswar reminded us of the price we paid for the Constitution, which guarantees rights. It is a politically sacred instrument created by men and women who risked lives and sacrificed their liberties to fight alien rulers and secured freedom for our people, not only of their generation but generations to follow. And Justice A.M. Sapre told us that the reference to each individual’s dignity — which the court overwhelmingly agreed was protected by privacy — in the Constitution’s opening lines was an explicit repudiation of what people of this country had inherited from the past.


When these two views are taken together with Justice Bobde’s vision of privacy as cognitive freedom and free choice, the conclusion unavoidably is: privacy is the revolutionary idea that every Indian in independent India is entitled to choose her own destiny. Privacy is self-government. Privacy is non-domination — it is our label for the idea that Indians are not subservient to any power. It is swaraj, realised for each one of us who make up the free republic of India.

Today, as we commemorate our independence from the oppression of colonial rule, we must pause to consider the status of the right on which all our freedoms rest.

Declarations of rights are one thing, their realisation is quite another. Swaraj is a work in progress.

Legalising passive euthanasia and affirming the rights of an adult woman to choose her spouse were possible thanks to the Right to Privacy. The decriminalisation of homosexuality — which we have reason to hope for despite recent setbacks — will also likely invoke the right to privacy. The cases against adultery, marital rape and the all-surveilling edifice of Aadhaar lean heavily on the same principle. The declaration of the Right to Privacy has allowed us, as citizens, a renewed and powerful line of argument against bad laws and state action.

Outside the court, we are forced to confront the reality that the right to privacy is only as strong as the civic and political culture in which it must work. The reflexes and default settings of the Indian police and state of today have colonial antecedents. Our numerous intelligence agencies exist and operate in 2018, for the most part, in the same manner they did before 1947 – under a shroud of secrecy, without a duty to seek prior permission or to answer for their actions to our representatives in Parliament or to the custodians of our rights in the courts. We, the people, have set no boundaries on their powers; so, we cannot complain when these powerful actors diverge from the roles they ought to play in a healthy democracy.

It is the same story with communications: the colonial legal architecture for intercepting and monitoring our communications endures. The law that exists to address wiretapping is rooted in the Indian Telegraph Act, 1885, and rules framed under it, at the prodding of the Supreme Court, in 1996. They permit wide grounds for surveillance. And by setting out a procedure through which it is hoped that the same arm of government that surveils also checks itself, whatever safeguards exist are rendered illusory and ineffective. Rules formulated under the Information Technology Act, 2000, to regulate our communications online are framed in the same spirit.

There is no denying the fact that surveillance has a role in maintaining peace and stability in democracies like ours. It is a vital weapon in the state’s arsenal against threats to national security and in the investigation of crimes. The problem, rather, is that long-settled defaults are changing, while we do little to understand or correct their effects. The defaults for record-keeping, for example, have shifted from deliberate forgetting to universal and permanent remembering. And with social media, communication that would have been private and transitory is now recorded and publicly visible. So, while a shadowy surveillance edifice turns even more opaque to us, we, the citizens, become ever more exposed to the state. It should be the other way round in a republic worth the name: the state and all those in power must be transparent to citizens, who must be left unmolested in their privacy.

Surveillance and censorship each breed more of the other. Through laws like sedition, which criminalise speech, successive governments have justified policing what we say. Equally, under the newly proposed measures for social media monitoring, for which the UIDAI (Unique Identification Authority of India) issued tenders this year, we will find ourselves increasingly fearful and inhibited in expressing ourselves online. A similar, much broader programme, under the aegis of the ministry of information and broadcasting, was withdrawn on August 3 after the Supreme Court agreed to hear the citizens' case against it, remarking that it seemed a dangerous proposition.

The push towards digital governance and the hasty adoption of technologies before we have fully understood their implications have the effect of creating — and delivering to the State — rich new streams of personal information. As the State blunders along in this act first, think later fashion, our very bodies are becoming sites for extraction of information. Colonial laws like the Identification of Prisoners Act, 1920, specify categories of convicts who must allow their measurements and photographs to be recorded. But even this law requires these records to be destroyed when the convicts are released. Today, under the aegis of Aadhaar, the government systematically collects biometrics of all Indians, defined for the present as photographs, fingerprints and iris scans. On the cards, during this session of Parliament, is a proposed law that will enable DNA profiling.

It should not surprise us that the government chose to argue against the very existence of a constitutional right to privacy in India. This resistance to checks is to be expected of any beneficiary of a large well of power, no matter that it is derived from the very citizens whose interests are ignored. Late last month, a committee of experts under Justice Srikrishna released a report and draft bill that would set the terms for how the state — as well as corporations — treat personal information, including our biometrics. It did so after ignoring calls to include in the panel citizens’ representatives, who were refused access to the deliberations. Today, let us remind ourselves that republics and democracies are fragile, that they are not self-sustaining. The price for the freedoms we, the people, enjoy is constant vigilance and continuous participation in democratic processes. Let’s start with the battle for a data protection law that would be worthy of the name.

Ujwala Uppaluri is a constitutional lawyer and a Fellow at the Centre for Communication Governance at National Law University Delhi