Linking PAN with Aadhaar – Update from the Supreme Court Hearing (Day – II)

The petitioners resumed their arguments in the case challenging the constitutionality of Section 139AA of the Income Tax Act (‘IT Act’). This provision mandates individuals to link their Permanent Account Numbers (PAN) with their Aadhaar number. The background to the case and our report from yesterday’s hearing can be found here.

The counsel for Maj. Gen. Vombatkere and Mr. Bezwada Wilson continued his arguments today. The hearing commenced with the submission that, despite the government’s assurances, the Aadhaar framework was extremely porous and ineffective. The blacklisting of 34,000 enrolment agencies was cited to substantiate this claim. It was also argued that biometric technology itself is fallible – instances of a hacker successfully copying the German Defence Minister’s fingerprints and Angela Merkel’s iris scans were cited. Further, replies to RTI queries showed that more than eighty-five lakh Aadhaar numbers had been deactivated due to biometric and demographic errors discovered later. The counsel also apprised the bench of Aadhaar numbers being issued to dogs, trees, chairs and ‘Coriander s/o pulao’. Pertinently, he outlined concerns about Aadhaar data being leaked by several state and central government portals. These facts were brought to the Court’s attention to emphasise that the petitioners were conscientious objectors to the Aadhaar project and had serious apprehensions regarding its robustness, among other issues.

With the court’s permission, the counsel then revisited the Supreme Court’s interim orders in the main challenge to Aadhaar. He pointed out that due to the reference order, and the difficulties associated with constituting a larger bench, the case had essentially been ‘canned’ or ‘put in deep freeze’. However, he emphasised that at each stage, judges had been conscious of the gravity of the issues involved and had consequently protected citizens with strong interim orders. This was done to protect individuals’ interests and prevent a situation of fait accompli. Particularly with respect to the order passed on 15 October 2015, it was pointed out that the Central Board of Direct Taxes was also a party before the Court in that interim order.

The counsel highlighted the gravity of the issues involved. If allowed, Aadhaar was likely to fundamentally alter the relationship between the citizen and the state and put every citizen on an ‘electronic leash’, enabling real-time surveillance. The Attorney General objected to this line of argument, contending that issues of privacy must not be raised in this case, in light of the pending reference.

On the issue of Section 139AA specifically, it was prayed that the provision must either be struck down or read down to make it voluntary. The submission was that the scheme of the Aadhaar Act was purely voluntary – it created a right to enrol for Aadhaar, but imposed no duty to do so. Reading Sections 3 and 7 of the Act, he argued that in addition to the scheme being purely voluntary, the only detriment could be the denial of a benefit or service. Since paying tax was neither, one could not be compelled to part with one’s biometrics. He also argued that the Act and the enrolment process contemplated free and informed consent. When viewed in this light, the mandatory nature of Section 139AA was in direct collision with the scheme of the Aadhaar Act. Justice Bhushan interjected, pointing out that the schemes of the two statutes were different, and that this made the legislation under challenge permissible. To this, it was submitted that a voluntary scheme could not be grafted onto the IT Act as a mandatory provision.

To support this contention further, it was contended that converting a right into a duty amounted to a colourable exercise of legislative power. Further, being coerced to enrol for a scheme that is essentially voluntary negates consent, rendering the legislation unworkable.

The petitioners’ counsel then apprised the Court of the penal consequences that would arise in the event of non-compliance with Section 139AA. These included higher Tax Deducted at Source (TDS), a penalty for failure to furnish returns of income and a fine of Rs. 10,000 for not possessing a PAN card. Further, the disabilities associated with not having a PAN under Section 114B were reiterated. On being queried by the bench whether mandating PAN was any different from mandating an alternative such as Aadhaar, it was argued that Aadhaar was intrusive to an unprecedented degree. PAN, on the other hand, was not intrusive to one’s body. Routine transactions such as opening a bank account or purchasing a motor vehicle should not be made contingent on parting with biometrics.

The bench also quizzed the petitioners on the issue of giving up biometrics for passports and other similar circumstances. The counsel responded by distinguishing such circumstances, arguing that in certain limited situations, such identification might be legitimate and necessary. This would be different from Aadhaar, where biometric authentication would become ubiquitous. Other such circumstances would include identifying prisoners. Additionally, in those cases the information would be stored locally and used only for a limited purpose.

It was also argued that under Section 30, the Aadhaar Act itself defined biometrics as sensitive ‘personal’ data or information. Reference was also made to Section 43A of the Information Technology Act to emphasise that ‘personal’ information is of, and belongs to, a person. Being intimate parts of the body, biometrics could not be considered to fall within the dominion of the state. Reliance was also placed on Salmond on Jurisprudence and the Oxford Handbook on Jurisprudence and Philosophy of Law to highlight that the right of bodily integrity included the ‘exclusive possession and use of his or her own body as against everyone else’. The petitioners’ counsel went on to state that the Indian Constitution does not establish a totalitarian state but creates a state that is respectful of individual liberty and freedoms. Drawing from the Preamble and emphasising the idea of a limited government, he said that the Indian Constitution is ‘not a charter of servitude’.

The petitioners’ counsel took the Court through a host of judgments for the proposition that the right to life extended to a right to protect one’s body and identity from harm. He cited landmark judgments such as National Legal Services Authority v. Union of India and Others (2014) 5 SCC 438 and Sunil Batra v. Delhi Administration and Others (1978) 4 SCC 494 to emphasise that personal liberty went beyond mere animal existence. He also touched upon the concepts of bodily integrity, informed consent and self-determination as essential facets of Article 21, read with Articles 14 and 19.

As a final argument, it was contended that the object of the provision itself was discriminatory. Section 139AA discriminates within a homogeneous class of assessees – between individuals willing to part with their biometrics to enrol for Aadhaar and those who are unwilling to do so. The provision unreasonably discriminates against the latter by subjecting them to grave penal consequences, even though both categories are willing taxpayers. A voluntary scheme could not, and should not, deprive individuals of their choice.

The petitioners’ counsel also briefly touched upon the concept of ‘informational self-determination’ and argued that the state did not have any eminent domain over one’s body. Parting with sensitive features such as biometrics should be subject to one’s control and consent.

Arguments on behalf of the petitioners are likely to be concluded tomorrow. The Attorney General, representing the Central Government, is likely to advance arguments on Tuesday (May 2).

Linking PAN with Aadhaar – Update from the Supreme Court Hearing

The petitions challenging the constitutionality of Section 139AA of the Income Tax Act (‘IT Act’) came up for hearing before the Supreme Court today. Section 139AA makes it mandatory for individuals to quote their Aadhaar number when applying for a Permanent Account Number (PAN) as well as to file income tax returns. Section 139AA(2) stipulates that failure to intimate the Aadhaar number would automatically invalidate one’s existing PAN. Further, not linking the two also creates a legal fiction by which the provisions of the Act would apply ‘as if the person had not applied for the allotment of the PAN’.

Passed in March 2017, this amendment to the IT Act comes in the backdrop of several pending petitions challenging the constitutional validity of the Aadhaar project. Since 2013, the Supreme Court has passed multiple interim orders directing that Aadhaar remain purely voluntary during the pendency of these petitions. In August 2015, the Supreme Court referred these matters to a larger bench, citing ambiguity in the Court’s jurisprudence on the fundamental right to privacy. Almost two years later, this bench is yet to be constituted.

This background is pertinent given that the petitioners were forced to limit their submissions to the issue of linking Aadhaar numbers with PAN. The pending reference effectively acted as a bar to assailing the IT Act provision on the basis of the privacy concerns associated with Aadhaar.

The challenge comprises two separate writ petitions, filed by different sets of petitioners. The first petition has been filed by Mr. Binoy Viswam and the second by (Retd.) Maj. Gen. Vombatkere and Mr. Bezwada Wilson, who are also petitioners in the main challenge to the Aadhaar project.

Arguments Advanced on behalf of Mr. Viswam:

The counsel for Mr. Viswam primarily advanced two arguments – that Section 139AA is unconstitutional as it violates and overrules several orders of the Supreme Court, and that it infringes Article 14 of the Constitution.

  • Section 139AA violates orders of the Supreme Court that were based on the Central Government’s undertakings – It was argued that by way of successive interim orders, such as those dated 23 September 2013, 11 August 2015, 15 October 2015 and 14 September 2016, the Court had directed that Aadhaar remain purely voluntary. The 15 October 2015 order specifically states that the Aadhaar scheme must remain voluntary ‘till the matter is finally decided by this Court one way or another’. Reference was also made to the Supreme Court’s recent order in Lokniti Foundation v. Union of India (6 February 2017), where it recorded the government’s submission that Aadhaar was voluntary for obtaining a telephone connection.

Based on these facts, it was argued that no one could be deprived of a service or benefit for want of an Aadhaar number. The bench observed that such undertakings could not preclude the Parliament from passing a law (such as that in issue). To this, the counsel responded stating that the state could not indirectly make Aadhaar mandatory under the IT Act while it remained voluntary under the parent Act, that is – the Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016 (‘Aadhaar Act’). Section 7 of the Act, which stipulates that no benefit or service can be denied on the ground of not having an Aadhaar number, was read to support this contention.

Additionally, it was argued that these interim orders granted individuals a right – the right to not enrol for Aadhaar. This could not be taken away till the Court decided the matter conclusively. The counsel buttressed this contention by citing precedent – Madan Mohan Pathak v. Union of India (1978) 2 SCC 50 and Indian Aluminium Co. v. State of Kerala (1996) 7 SCC 637, amongst others.

  • Section 139AA is violative of Article 14 because it unreasonably targets individuals out of all categories of assessees, and because it is arbitrary – Besides individuals, entities such as companies, partnership firms, co-operative societies and trusts are also required to file returns on their income. As a result, it was argued that Section 139AA discriminated between assessees as it only adversely impacted individuals.

The argument advanced was that the twin test of permissible classification was not satisfied in the present case – the classification is not based on an intelligible differentia, and the differentia bears no rational relation to the object sought to be achieved by the provision. Given that the object of linking Aadhaar with PAN is to address black money and weed out fakes, the provision fails to address nefarious activities carried out through companies or trusts. This argument was later also advanced on the ground of proportionality.

This was met with some resistance by the bench, which observed that there was a natural distinction between natural and juristic persons, and a distinction on that ground was not discriminatory per se. It went on to observe that the law could be treated as a mere replacement of one system by another. The larger challenge to Aadhaar aside, it was difficult to assail the validity of a new or reformed system on this ground alone.

Counsel for Mr. Viswam then attempted to show that the provision is unconstitutional for being arbitrary. He submitted that such an unreasonable classification virtually resulted in ‘civil death’ for an individual. As per Section 114B of the IT Act, a PAN is necessary for eighteen essential transactions including opening a bank account and purchasing shares. If invalidated under Section 139AA(2), it would unreasonably restrict the right of individuals to carry on their business, as opposed to entities such as companies. Further, the individual would also be subject to additional penalties due to the legal fiction by which an individual is assumed never to have applied for a PAN. It was argued that this retrospective effect was highly prejudicial to individual assessees.

This was immediately objected to by the Attorney General, who argued that the legal fiction was not intended to be retrospective and would only apply from 1 July 2017, which is the deadline for linking Aadhaar with PAN.

To conclude, it was submitted that as per data published by the Unique Identification Authority of India, enrolment rates for several states were above 100%. This data was used to show that the problem of fakes or bogus cards existed even under the Aadhaar regime.

Post lunch, the counsel for (Retd.) Maj. Gen. Vombatkere and Mr. Wilson began arguments. He outlined his prayers before the Court – that Section 139AA be struck down, or alternatively, be read down to make Aadhaar a voluntary requirement.

As a preliminary point, he stated that neither of the petitioners had enrolled for Aadhaar. Being parties to the petitions challenging the Aadhaar project, they were conscientious objectors to the entire scheme. He proceeded to outline the broader issues in the challenge, with a view to providing the judges with some context. His submission rested on the idea of autonomy and the bodily integrity associated with biometrics such as fingerprints and iris scans. Arguing that the Indian Constitution guaranteed a free, democratic society, he stated that the state had no dominion over core elements of one’s body and identity. As a result, the state could not coerce citizens to part with this information as a condition or bargain for enjoying the rights guaranteed to them.

The arguments on behalf of (Retd.) Maj. Gen. Vombatkere and Mr. Wilson will continue on April 27 and will build on the specific issue of Section 139AA. The Attorney General is also likely to make his submissions on behalf of the Union Government.

How (not) to get away with murder: Reviewing Facebook’s live streaming guidelines

Introduction

The recent shooting in Cleveland, live streamed on Facebook, has brought the social media company’s regulatory responsibilities into question. Since the launch of Facebook Live in 2016, the service’s role in raising political awareness has been acknowledged. However, the service has also been used to broadcast several instances of graphic violence.

The streaming of violent content (including instances of suicide, murder and gang rape) has raised serious questions about Facebook’s responsibility as an intermediary. While it is not technically feasible for Facebook to review all live videos while they are being streamed, or to filter them before they are streamed, the platform does have a routine procedure in place to take down such content. This post examines the guidelines in place for taking down live streamed content and discusses alternatives to the existing reporting mechanism.

What guidelines are in place?

Facebook has ‘community standards’ in place. However, its internal regulation methods are unknown to the public. Live videos have to be in compliance with these ‘community standards’, which specify that Facebook will remove content relating to ‘direct threats’, ‘self-injury’, ‘dangerous organizations’, ‘bullying and harassment’, ‘attacks on public figures’, ‘criminal activity’ and ‘sexual violence and exploitation’.

The company has stated that it ‘only takes one report for something to be reviewed’. This system of review has been criticized since graphic content could go unnoticed without a report. In addition, this form of reporting could prove ineffective since there is no mandate of ‘compulsory reporting’ for viewers. Incidentally, the Cleveland shooting video was not detected by Facebook until it was flagged as ‘offensive’, a couple of hours after the incident. The company has also stated that it is working on developing ‘artificial intelligence’ that could help put an end to these broadcasts. However, it currently relies on the reporting mechanism, where ‘thousands of people around the world’ review posts that have been reported. The reviewers check whether the content goes against the ‘community standards’ and ‘prioritize videos with serious safety implications’.

While deciding if a video should be taken down, the reviewers will also take the ‘context and degree’ of the content into consideration. For instance, content that is aimed at ‘raising awareness’, even if it displays violence, will be allowed. However, content that is celebrating such violence would be taken down. To demonstrate, when a live video of civilian Philando Castile being shot by a police officer in Minnesota went viral, Facebook kept the video up on their platform, stating that it did not glorify the violent act.
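Facebook has not disclosed how this triage works internally, but the description above (a single report is enough to trigger review, and reports with ‘serious safety implications’ are prioritized) can be illustrated with a minimal sketch. The sketch below is purely hypothetical: the category names, severity weights, and the `TriageQueue` and `QueuedReport` structures are assumptions for illustration and do not reflect Facebook’s actual systems.

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical severity weights -- lower numbers are reviewed first.
# The categories loosely mirror the 'community standards' headings quoted
# above; the ordering itself is an assumption, not Facebook's.
SEVERITY = {
    "direct threats": 0,
    "self-injury": 0,
    "sexual violence and exploitation": 0,
    "criminal activity": 1,
    "dangerous organizations": 1,
    "bullying and harassment": 2,
    "attacks on public figures": 2,
    "other": 3,
}

@dataclass(order=True)
class QueuedReport:
    priority: int
    order: int                           # tie-breaker: earlier reports first
    video_id: str = field(compare=False)
    category: str = field(compare=False)

class TriageQueue:
    """A single user report is enough to queue a video for human review;
    reports in safety-critical categories are popped before the rest."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()
        self._queued = set()             # one report already triggers review

    def report(self, video_id: str, category: str) -> None:
        if video_id in self._queued:     # duplicate reports are ignored
            return
        self._queued.add(video_id)
        priority = SEVERITY.get(category, SEVERITY["other"])
        heapq.heappush(
            self._heap,
            QueuedReport(priority, next(self._counter), video_id, category),
        )

    def next_for_review(self):
        return heapq.heappop(self._heap) if self._heap else None

if __name__ == "__main__":
    q = TriageQueue()
    q.report("live-123", "bullying and harassment")
    q.report("live-456", "direct threats")
    q.report("live-456", "direct threats")   # ignored: already queued
    print(q.next_for_review().video_id)      # -> live-456 (safety-critical)
    print(q.next_for_review().video_id)      # -> live-123
```

In practice, any such queue would sit alongside automated detection and the ‘context and degree’ judgment described above, which is precisely the part that cannot be reduced to a simple ranking.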

Regulation

Other than the internal guidelines by which Facebook regulates itself, there have been no instances of government regulators, such as the United States’ Federal Communications Commission, intervening. Unlike the realm of television, where the FCC regulates content and deems material ‘inappropriate’, social media websites are protected from content regulation.

This brings up the question of intermediary liability and Facebook’s liability for hosting graphic content. Under American Law, there is a distinction between ‘publishers’ and ‘common carriers’. A common carrier only ‘enables communications’ and does not ‘publish content’. If a platform edits content, it is most likely a publisher. A ‘publisher’ has a higher level of responsibility for content hosted on their platform, unlike a ‘carrier’. In most instances, social media companies are covered under Section 230 of the Communications Decency Act, a safe harbor provision, by which they would not be held liable for third-party content.  However, questions have been raised about Facebook’s role as a ‘publisher’ or ‘common carrier’, and there seems to be no conclusive answer.

Conclusion

Several experts have considered possible solutions to this growing problem. Some believe that such features should be limited to certain partners and should be opened up to the public only once additional safeguards and better artificial intelligence technologies are in place. In these precarious situations, enforcing stricter laws on intermediaries might not resolve the issue at hand. Some jurisdictions have ‘mandatory reporting’ provisions, specifically for crimes of sexual assault. In India, under Section 19 of the Protection of Children from Sexual Offences Act, 2012, ‘any person who has apprehension that an offence…is likely to be committed or has knowledge that such an offence has been committed’ has to report the offence. In the context of cyber-crimes, this system of ‘mandatory reporting’ would shift the onus onto viewers and supplement the existing reporting system. Mandatory provisions of this nature do not exist in the United States, where most of the larger social media companies are based.

Similarly, possible solutions should focus on strengthening the existing reporting system, rather than holding social media platforms liable.

Supreme Court hears the WhatsApp-Facebook Data Sharing Case

The special leave petition against the Delhi High Court’s ruling upholding WhatsApp’s updated privacy policy came up for hearing before a bench of five judges today. This policy is contentious because it allows WhatsApp to share valuable personal information of its users, including phone numbers, contact lists and profile pictures, with its parent company Facebook. As reported earlier, today’s hearing was to determine whether a bench of five judges can hear the case and to fix a date for the hearing.

During the course of arguments, the petitioners’ counsel focused on the need for regulating platforms such as WhatsApp and Facebook. It was his contention that the High Court should not have relegated the matter to a simple issue of a private contract between a user and a company. Arguing that such a proposition was overbroad, he contended that the Telecom Regulatory Authority of India and/or the Central Government must frame comprehensive regulations to guarantee the rights of individuals (including the right to privacy) using such services. To emphasise the need for regulation, he took the Court through WhatsApp’s privacy policy, outlining the nature and extent of information collected and shared by it.

The Attorney General interjected requesting the Court to adjourn the matter for a few months. It was his submission that the Centre was in the process of formulating a data protection framework to regulate private entities collecting personal data.

For WhatsApp, it was argued that the service is popular only because of its insistence on privacy. It was submitted that no part of the content of any message was shared with any third party. The counsels for WhatsApp and Facebook also questioned the setting up of a constitution bench to hear this case as according to them, the case lacked a question of constitutional importance.

In the order passed by the Court today, this opposition to setting up a bench of five judges to hear the matter was recorded. However, Justice Mishra went on to state that this contention would be addressed at the time of the final verdict, signifying the Court’s intention to go ahead with the matter. The petitioners have been requested to file their propositions (questions of law) by the 24th of April 2017. The case will be taken up for hearing again on the 27th of April 2017.

Reviewing the Law Commission’s latest hate speech recommendations

Introduction

The Law Commission has recently released a report on hate speech laws in India. The Supreme Court in Pravasi Bhalai vs. Union of India asked the Law Commission to recommend changes to existing hate speech laws and to “define the term hate speech”. The report discusses the history of hate speech jurisprudence in India and in certain other jurisdictions. In addition, it stresses the difficulty of defining hate speech and the lack of a concise definition. In the absence of such a definition, certain ‘identifying criteria’ have been mentioned to detect instances of hate speech. It also discusses the theories of Jeremy Waldron (the ‘dignity’ principle) and makes a case for protecting the interests of minority communities by regulating speech. In this regard, two new sections for the IPC have been proposed. They are as follows:

(i) Prohibiting incitement to hatred-

“153 C. Whoever on grounds of religion, race, caste or community, sex, gender identity, sexual orientation, place of birth, residence, language, disability or tribe –

(a)  uses gravely threatening words either spoken or written, signs, visible representations within the hearing or sight of a person with the intention to cause, fear or alarm; or

(b)  advocates hatred by words either spoken or written, signs, visible representations, that causes incitement to violence shall be punishable with imprisonment of either description for a term which may extend to two years, and fine up to Rs 5000, or with both.”

(ii) Causing fear, alarm, or provocation of violence in certain cases.

“505 A. Whoever in public intentionally on grounds of religion, race, caste or community, sex, gender, sexual orientation, place of birth, residence, language, disability or tribe-

uses words, or displays any writing, sign, or other visible representation which is gravely threatening, or derogatory;

(i) within the hearing or sight of a person, causing fear or alarm, or;

(ii) with the intent to provoke the use of unlawful violence,

against that person or another, shall be punished with imprisonment for a term which may extend to one year and/or fine up to Rs 5000, or both”.

The author is of the opinion that these recommended amendments are vague and broadly worded and could lead to a chilling effect and over-censorship. Here are a few reasons why the recommendations might not be compatible with free speech jurisprudence:

  1. Three-part test

Article 10 of the European Convention on Human Rights lays down three requirements that need to be fulfilled to ensure that a restriction on free speech is warranted. The Law Commission report also discusses this test; it includes the necessity of a measure being ‘prescribed by law’, the need for a ‘legitimate aim’ and the test of ‘necessity and proportionality’.

Under the ‘prescribed by law’ standard, it is necessary for a restriction on free speech to be ‘clear and not ambiguous’. For instance, a phrase like ‘fear or alarm’ (existing in Section 153A and Section 505) has been criticized for being ‘vague’. Without defining or restricting this term, the public would not be aware of what constitutes ‘fear or alarm’ and would not know how to comply with the law. This standard was also reiterated in Shreya Singhal vs. Union of India, where it was held that the ambiguously worded Section 66A could be problematic for innocent people since they would not know ‘which side of the line they fall’ on.

  2. Expanding scope to online offences?

The newly proposed sections also mention that any ‘gravely threatening words within the hearing or sight of a person’ would be penalized. Presumably, the phrase ‘within the hearing or sight of a person’ broadens the scope of the provision and could allow online speech to come under the ambit of the IPC. This phrase is similar to the wording of Section 5(1) of the Public Order Act, 1986[1] in the United Kingdom, which penalizes “harassment, alarm or distress”. Even though that section does not explicitly mention that it covers offences on the internet, it has been presumed to do so.[2]

Similarly, if the intent of the framers of Section 153C is to expand the scope to cover online offences, it might introduce the same issues as the omitted Section 66A of the IT Act did. Section 66A intended to penalize the transmission of information which was ‘menacing’ and also which promoted ‘hatred or ill will’. The over-breadth of the terms in the section led to scrapping it. Another reason for scrapping the section was the lowering of the ‘incitement’ threshold (discussed below). Even though the proposed Section 153C does not provide for as many grounds (hatred, ill will, annoyance, etc.), it does explicitly lower the threshold from ‘incitement’ to ‘fear or alarm’/’discrimination’.

  3. The standard of ‘hate speech’

 The report also advocates for penalizing the ‘fear or alarm’ caused by such speech, since it could potentially have the effect of ‘marginalizing a section of the society’. As mentioned above, it has been explicitly mentioned that the threshold of ‘incitement to violence’ should be lowered and factors like ‘incitement to discrimination’ should also be considered.

The Shreya Singhal judgment drew a distinction between ‘discussion, advocacy and incitement’, stating that a restriction on speech protected by Article 19(1)(a) of the Constitution would be justified only where the speech amounted to ‘incitement’ and not merely ‘discussion’ or ‘advocacy’. This distinction was drawn so that discussing or advocating ideas which could lead to problems with ‘public order’ or disturb the ‘security of the state’ could be differentiated from ‘incitement’, which requires a closer ‘causal connection’.

Similarly, if the words used contribute to causing ‘fear or alarm’, the threshold of ‘incitement’ would be lowered, and constitutionally protected speech could be censored.

Conclusion

Despite the shortcomings mentioned above, the report is positive in a few ways. It draws attention to important contemporary issues affecting minority communities and how speech is often used to mobilize communities against each other. It also relies on Jeremy Waldron’s ‘dignity principle’ to make a case for imposing differing hate speech standards to protect minority communities. In addition, the grounds for discrimination now include ‘tribe’ and ‘sexual orientation’ amongst others.

However, existing case law, coupled with recent instances of censorship, could make the insertion of these provisions troubling. India’s relationship with free speech is already dire; the Press Freedom Index ranks the country at 133 (out of 180) and the Freedom on the Net report states that India is only ‘partly free’ in this regard. The Law Commission might need to reconsider the recommendations for the sake of upholding free speech. Pravasi Bhalai called for sanctioning politicians’ speeches, but the recommendations made by the Law Commission might be far reaching and their effects could be chilling.

 

[1] Section 5- Harassment, alarm or distress.
(1)A person is guilty of an offence if he—
(a)uses threatening or abusive words or behaviour, or disorderly behaviour, or
(b)displays any writing, sign or other visible representation which is threatening or abusive,
within the hearing or sight of a person likely to be caused harassment, alarm or distress thereby.

[2] David Wall, Cybercrime: The Transformation of Crime in the Information Age, Page 123, Polity.

Update from the Supreme Court hearing in the WhatsApp-Facebook Data Sharing Case

In September last year, the Delhi High Court had upheld WhatsApp’s updated privacy policy, which allows it to share users’ personal information with its parent company, Facebook. Aggrieved by the Court’s decision, the petitioners approached the Supreme Court earlier this year.

On 6th February, the Supreme Court had fixed 12th May as the date for final adjudication of this case. This was one of the three cases listed for hearing before a constitution bench during the Court’s summer vacation. During today’s hearing, the counsel for WhatsApp sought a fresh date, citing his unavailability in May. The bench, comprising the Chief Justice of India and Justice D.Y. Chandrachud, observed that it would be inappropriate for them to reschedule the date as the case was ultimately going to be heard and decided by another bench of the Supreme Court.

This raised the question whether the matter was required to be heard by a constitution bench at all. It was argued on behalf of WhatsApp that the case was a simple contractual matter and needn’t be referred to a larger bench. On the other hand, Facebook’s counsel contended that if the petitioners intended to pursue their claim based on a fundamental right to privacy under Article 21, the case could not proceed in light of the pending constitutional reference. (The question of whether a fundamental right to privacy exists, and its scope was referred to a larger bench in 2015).

The petitioners’ counsel contended that the privacy claim in this case arose from Article 19(1)(a), as the ability to communicate and speak freely was an inherent aspect of privacy. As a result, the pending constitutional reference should not be considered a bar for this case to proceed.

The bench reiterated its discomfort with deciding any of these issues. The case has now been listed for hearing before a constitution bench on 18th April to determine if it can be heard by five judges, and for fixing the date for hearing.

Gaps in the Protection of Critical Information Infrastructures in India

 

Sowmya Karun

In a previous post, we critically examined the legislative and institutional framework for the protection of critical information infrastructures (“CIIs”) in India. The National Critical Information Infrastructure Protection Centre (“NCIIPC”) has since revamped its website, a heartening move towards transparency and increased public engagement. According to “data shared by a broad range of NCIIPC constituencies”, a total of around 7.5 million incidents, threat feeds and vulnerabilities were reported from CIIs in 2016, as noted in the first NCIIPC newsletter. The newsletter further indicated the number of incidents on a monthly basis, the most impacted cities in terms of the volume of cyber attack incidents, the major forms of attacks and the country-wise share of the origins of cyber attacks on CIIs.

Sector-specific guidelines for CII

As recommended in our post, the NCIIPC has been increasing its efforts towards the protection of sectoral critical information infrastructures. There has been particular emphasis on designing sector-specific cyber security practices. For instance, the Ministry of Power has taken steps to sensitize critical organisations in the power sector, following instructions received from the NCIIPC.  This includes the auditing of underlying information infrastructures by CERT-In accredited agencies and the setting up of a dedicated computer emergency response team for the transmission sector. This is especially significant as the deployment of smart grid technologies on a large scale poses unique cybersecurity risks.  Similarly, it has been reported that the NCIIPC has conducted workshops on cyber security and critical information infrastructure protection for the oil and gas industry. Further, the NCIIPC has engaged with the Chief Information Security Officers in strategic and public enterprises (which includes heavy industry and public sector units) for the identification, protection and notification of their CII.

However, the NCIIPC’s attempts to address the claim that there is “an urgent need to evolve sector specific guidelines for handling cyber crises” remain piecemeal and reactive. The transportation sector, for instance, is particularly vulnerable to cyber threats on account of growing dependencies on network-based systems for navigation, tracking and positioning, amongst others. There have been reports of Pakistani hackers tapping into air traffic control systems in Jammu as well as gaining access to the GPS data of police vehicles in Madhya Pradesh. These instances represent only the tip of the iceberg when it comes to the capacity of malicious actors to disrupt CIIs in the transportation sector. Despite these incidents, there have been no reports of sector specific cybersecurity guidelines being developed for the transportation sector. The Long Range Identification and Tracking (LRIT) system under the Ministry of Shipping remains the only transportation system to be declared a “protected system” under the Information Technology Act, 2000.

Information Sharing and Analysis

While the development of sector specific guidelines for cyber-security should continue, it is also necessary to focus on mechanisms for information sharing and analysis across sectors. Sharing information about vulnerabilities, threats and attacks is essential, as security solutions cannot be built without shared threat intelligence and co-ordinated responses. In recognition of this, the National Cybersecurity Policy of 2013 (“the Policy”) noted the necessity of establishing a mechanism for sharing information on cyber-security incidents (Paragraph IV(A)(7)). The creation of such a mechanism, according to the Policy, will generate the understanding of existing and potential threats necessary to enable timely information sharing (Paragraph IV(E)(1)). Prior to the Policy, the creation of Information Sharing & Analysis Centres (ISACs) had also been recommended by the Joint Working Group on Engagement with Private Sector on Cyber Security (“JWG”). The JWG conceived of ISACs set up within various industry verticals, in partnership with the private sector, which would co-ordinate with sectoral CERTs as well as CERT-In.

However, institutional mechanisms for the streamlined and prompt sharing of information are currently not in place for most sectors. An ISAC has been set up at the Institute for Development and Research in Banking Technology (IDRBT), but it remains restricted to financial services. While the Central Government stated that action was initiated in 2014 for setting up similar ISACs in the power and petroleum sectors, there is no confirmation of this in the public domain. There is also no clarity on whether ISACs will be instituted for other sectors, and there have been no reports on steps being taken for cross-sectoral information sharing and analysis. In a move forward, the NCIIPC has now made available on its website forms for reporting vulnerabilities as well as cyber attacks on critical information infrastructures. However, this is a poor substitute for a mandatory and systematic mechanism for collating information on threats, vulnerabilities and attacks. To this end, it is essential to urgently initiate the setting up of sectoral ISACs under the guidance of the NCIIPC. A cross-sectoral ISAC, modelled along the lines of the National Council of ISACs in the US, could grow to function as an effective platform. It may also prove useful to pursue collaborations with existing global information sharing networks (such as the Financial Services Information Sharing and Analysis Centre (FS-ISAC)). Similarly, the merging of sectoral platforms to create a collaborative intelligence sharing platform under the National Cybersecurity Co-ordination Centre is recommended.
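To make the idea of “a mandatory and systematic mechanism for collating information” more concrete, the sketch below shows one way a cross-sectoral ISAC could normalise incident reports received from sectoral bodies and aggregate them into a shared threat picture. It is only an illustrative sketch: the field names, sector labels and severity scale are assumptions and are not drawn from the NCIIPC’s reporting forms or from any existing ISAC standard.

```python
from collections import Counter
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative sector labels; in practice the NCIIPC and sectoral
# regulators would define the authoritative taxonomy.
SECTORS = {"power", "oil_and_gas", "banking", "transport", "telecom", "government"}

@dataclass
class IncidentReport:
    """A normalised record a sectoral ISAC could forward upward for
    cross-sectoral collation. All field names are hypothetical."""
    reported_at: str            # ISO-8601 timestamp
    sector: str                 # one of SECTORS
    category: str               # e.g. "phishing", "malware", "web defacement"
    severity: int               # 1 (low) to 5 (critical) -- assumed scale
    description: str

    def __post_init__(self):
        if self.sector not in SECTORS:
            raise ValueError(f"unknown sector: {self.sector}")

def collate(reports):
    """Aggregate reports into a simple cross-sector picture:
    incident counts per sector and per attack category."""
    by_sector = Counter(r.sector for r in reports)
    by_category = Counter(r.category for r in reports)
    return {"by_sector": dict(by_sector), "by_category": dict(by_category)}

if __name__ == "__main__":
    now = datetime.now(timezone.utc).isoformat()
    reports = [
        IncidentReport(now, "power", "malware", 4, "SCADA workstation infection"),
        IncidentReport(now, "transport", "gps_spoofing", 3, "Vehicle tracking anomaly"),
        IncidentReport(now, "power", "phishing", 2, "Credential-harvesting email"),
    ]
    # Shared threat picture that could be circulated back to all sectors.
    print(json.dumps(collate(reports), indent=2))
    # Individual reports can also be serialised for exchange.
    print(json.dumps(asdict(reports[0]), indent=2))
```

A common exchange format of this kind is what would allow sectoral ISACs, a cross-sectoral platform and global networks such as FS-ISAC to interoperate, rather than each sector collating threat information in isolation.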

Conclusion

To conclude, it is heartening to observe the progressive changes the NCIIPC has made as well as the creation of sector specific guidelines in certain sectors. However, this must progress across various critical sectors in addition to being placed within broader information sharing mechanisms. It is hoped that the NCIIPC will continue on the path of transparency and information sharing in this regard.