Google Faces Legal Hurdles Under Brazilian Internet Law

By Raissa Campagnaro[1]

The Brazilian Federal Prosecution Ministry has brought civil proceedings against Google for flouting Brazilian data protection law. The suit challenges Google’s access to the content of emails exchanged by Gmail users on multiple grounds, including Google’s failure to obtain express consent.

In October 2016, Brazil’s Federal Prosecutor filed a public civil suit against Google, claiming that the company had failed to comply with the country’s internet law, the Internet Bill of Rights. The suit argues that, during an earlier civil inquiry conducted by the prosecution, Google disclosed that it scans the content of emails exchanged by Gmail users. According to the Federal Prosecutor, this violates Brazilian data protection standards.

The Internet Bill of Rights establishes data protection principles similar to those set up under the EU Data Protection Directive 95/46/EC. Under this law, any processing of data must be pursuant to express consent. The law specifically requires that the clause seeking consent be prominently displayed and easy to identify amongst other terms of the contract. The law also recognises a right to not have one’s data transferred to third parties without consent and a right to be informed about the specific purposes of the personal data collection, usage, storage, treatment and protection.

When asked about its compliance with the legislation, Google submitted that it analyses email messages to improve the user experience by filtering out unwanted content, spam and malware. It also submitted that the scanning of messages is used to offer products and advertisements to the user and to classify emails into categories such as ‘social’, ‘promotions’, etc. Finally, Google contended that users consent to the scanning of emails at the time of signing up, by agreeing to the privacy policy within Gmail’s terms of service.

However, the Federal Prosecution Ministry considers these practices to be ‘profiling’ – the aggregation of personal data to build user profiles based on behaviour, online habits and preferences, which can then be used to predict future actions and decisions. Profiling is frequently used for behavioural advertising, in which aggregated personal data is transferred to other ISPs, who use it to direct ads, products and services determined by the person’s past online activity. According to the Federal Prosecutor, this not only violates people’s right to privacy, especially their right to informational self-determination, but also interferes with consumers’ freedom of choice.

Several scholars and researchers have also opposed profiling and behavioural advertising, arguing that it has severe negative consequences. These include (i) denial of credit or loans; (ii) different health insurance offers based on a person’s medical history or the nature of activities they engage in; and (iii) adaptive pricing based on criteria that involve some level of discrimination. This is problematic because online profiles are necessarily limited: a person’s life consists of much more than the online information that is collected and aggregated. As a result, personal data aggregation, processing and analysis can produce an incomplete or incorrect picture of an individual, leading to wrongful interventions in their life. And even if a profile were a complete reflection of a person’s life, the choice to have one’s data collected and used for particular purposes must always rest with the user.

The suit alleges that Google’s practices do not meet the legal requirement of express consent, including the requirement that the consent clause be prominently displayed within the policy. It suggests that Google be required to obtain specific consent in order to access the content of emails.

The case also challenges the fact that Google’s privacy policy does not allow consumers to withdraw consent, undermining their control over their data. It is further argued that consent should be sought afresh every time Google changes its privacy policy. The lack of clear and precise information about how data is processed is another issue raised in the case, as it violates the right of Gmail users to information regarding the use of their data.

To substantiate its case, the Federal Prosecutor is relying on an Italian case in which Google’s data processing activities were challenged. The ruling was based on Italy’s Data Privacy Code, which establishes data protection guarantees such as (i) fair and lawful processing of data; (ii) specific, explicit and legitimate purposes and uses of data; (iii) processing that is not excessive in relation to the purposes for which the data is collected or subsequently processed; and (iv) retention of data only for as long as truly necessary. In addition, the law stipulates that a data subject must receive notice about how their data will be processed, allowing them to make an informed decision, and requires consent to be express and documented in writing.

In 2014, the Garante (the Italian Data Privacy Authority, hereinafter “the Authority”) held that Google had failed to comply with several requirements of the Italian legislation. First, the information Google provided about how data processing was carried out was considered insufficient, as it was too general. Second, the consent obtained through the privacy policy agreement was held to be too broad; the Authority held that consent should be prior and specific to the data processing in question. Although the decision condemned the company’s practices, it did not establish any guidelines for Google to adopt in this regard.

Through the present suit, the Brazilian Federal Prosecutor seeks (i) suspension of Google’s email content analysis, that is, the scanning of emails of Gmail users who have not given express consent; (ii) an obligation to obtain express consent from users before scanning or analysing the content of emails; and (iii) the possibility of withdrawing consent. The suit seeks an order directing Google to change its privacy policy to ensure that consent is informed and specific to content analysis.

This case demonstrates a new dimension of data protection concern. Unlike the more common data breach cases, where the damage is usually discovered too late or is too extensive to repair, the Brazilian and Italian cases are examples of proactive measures taken to minimise future risks. The importance of a legal framework that uses data protection principles to guarantee consumers’ right to privacy is well recognised. Now, it appears that these rules are starting to be enforced more effectively and, as a consequence, the right to privacy can be observed in practice.

[1] Raissa is a law student from Brazil with an interest in internet law and policy. Raissa has been interning with the civil liberties team at CCG for the past month.

Privacy Concerns Under the HIV Bill 2014

The Human Immunodeficiency Virus and Acquired Immune Deficiency Syndrome (Prevention and Control) Bill, 2014 (the HIV Bill) is likely to be tabled in the Rajya Sabha in the current winter session. The HIV Bill is aimed at preventing and controlling the spread of Human Immunodeficiency Virus (HIV) and Acquired Immune Deficiency Syndrome (AIDS) and protecting the human rights of those affected by HIV and AIDS.

Important human rights considerations under the HIV Bill include prohibiting discrimination against HIV+ persons and also addressing the causes from which such discrimination stems. Lack of safeguards for sensitive medical information such as a person’s HIV status and the subsequent use of this information for other purposes enhance the scope for discrimination. In an attempt to address this, the Bill imposes several obligations on central and state governments, healthcare providers and establishments (such as organisations, cooperative societies etc.). This post examines the provisions relating to three critical aspects of the HIV Bill – informed consent, disclosure of information and clauses related to confidentiality.

INFORMED CONSENT

Clause 2(n) defines “informed consent” under the HIV Bill. There are two elements to this definition. The first element stipulates that consent must be without any coercion, undue influence, fraud, mistake or misrepresentation. The second element requires that consent must be obtained after being informed of the risks, benefits and alternatives to the proposed intervention and in a language or manner that can be understood by the individual giving consent.

Clause 5 of the HIV Bill mandates that informed consent must be sought before subjecting any person to an HIV test, or if an HIV+ person or persons residing with her are subjected to any medical treatment, intervention or research. If the person in question is incapable of giving consent, it is to be sought from her representative.

Further, this clause stipulates that informed consent includes counselling both before and after such a test is conducted.

Clause 6 of the HIV Bill lays down four exceptions where medical interventions can be carried out without obtaining such consent. The first exception pertains to a court order that may require a person to undergo an HIV test if the court feels that this information is necessary to determine the issues before it.

The second exception allows the procuring, processing, distribution or use of a human body or parts (such as tissues, blood, semen or other bodily fluids) for medical research or therapy. This exception is extremely broad in its scope. The Bill does not define either ‘medical research’ or ‘therapy’. It is difficult to ascertain the exact purpose for this exemption based on the text of the Bill alone. Furthermore, it is unclear why an exception should be made for medical research at all. For example, South Africa’s ‘National HIV Counselling and Testing Policy Guidelines’ require informed consent to be in writing in the context of research and clinical trials. This exception also states that if the person undergoing the test requests its result prior to donation, she would only be entitled to it after having undergone post-test counseling.

The third exception deals with HIV tests for epidemiological or surveillance purposes where the test is anonymous and not for the purpose of determining a person’s HIV status. However, the subjects of these tests are required to be informed of the purposes of such a study. Again, despite the fact that the test is anonymous, it is unclear why the obligation to seek informed consent has been done away with. Participation in any study must be voluntary and based on an informed decision.

The final exception allows an HIV test to be conducted for screening purposes in licensed blood banks.

DISCLOSURE OF HIV STATUS

Clause 8 provides that no person can be compelled to disclose their own HIV status unless required to do so ‘by an order’ which states that the disclosure is necessary in the interest of justice or for the determination of issues before it. This clause fails to mention that the order must be by a competent court. The Parliamentary Standing Committee Report on this Bill had recommended this addition citing ambiguity in the existing provision. However, the HIV Bill has not been amended to reflect this recommendation.

The HIV Bill states that any person who has information about another’s HIV status or any other private information, which was either imparted in confidence or in a fiduciary relationship, cannot disclose or be compelled to disclose such information except with the informed consent of that person. This clause requires the consent to be recorded in writing.

However, the Bill envisages six situations where such disclosure may be made without seeking informed consent.

The first exception deals with disclosure made to another healthcare provider who is involved in the treatment or counseling of that person, provided that the disclosure is necessary for the treatment.

The second exception allows disclosure pursuant to an order of a court when the information is necessary in the interest of justice or for determination of any issue before it. Seeing that this exception permits disclosure specifically pursuant to a court order, there is no reasonable explanation for the vague drafting of the first part of Clause 8.

The third exception permits disclosure in suits or legal proceedings when such information is necessary for filing the proceedings or instructing one’s lawyers.

The fourth exception allows a physician or a counsellor to disclose the HIV+ status of a person to his or her partner if they reasonably believe that the partner is at significant risk of HIV transmission. However, Clause 9 stipulates safeguards for this. Such disclosure is only permissible if the HIV+ person has been counseled to inform their partner and the physician or counsellor is satisfied that this is not likely to happen. They are under an additional obligation to inform the HIV+ person of their intention to disclose this information to their partner. This information can only be disclosed in person and after the partner has been counselled.

Clause 9 further provides that if the HIV+ person is a woman who is at the risk of being abandoned or abused (physically or mentally) as a result of such disclosure, the counsellor or physician has an obligation to not inform her partner. This clause also absolves the physician or counsellor from any civil or criminal liability arising out of disclosure or non-disclosure under this clause.

The fifth exception allows disclosure of statistical or other information where it is reasonably clear that it cannot lead to the person’s identification. The last exception permits disclosure to officers of the central and state governments or the State AIDS Control Society for the purposes of monitoring, evaluation or supervision. This exception is also couched in extremely broad and vague terms. Ideally, the law should explicitly mention the specific authorities or officers who may have access to this information.

OBLIGATIONS OF ESTABLISHMENTS

Clause 11 of the HIV Bill requires every establishment (body corporate, co-operative society, organisations etc.) to adopt data protection measures to store HIV related information of persons. These measures will be framed by way of guidelines by the government, including mechanisms for accountability and liability.

SPECIAL PROCEDURE IN COURT

The HIV Bill also incorporates procedures to ensure confidentiality during judicial processes. It allows the court to pass an order to – a) suppress the identity of a person by using a pseudonym; b) hold the proceedings in camera; or c) restrain any publication that would disclose the identity of such person, if an application is made to this effect.

PENALTIES

It is pertinent to note that the HIV Bill makes no mention of any penalty for a breach of obligations under Clause 5 (pertaining to informed consent) and Clause 8 (pertaining to disclosure of information).

It also mandates every state government to appoint an Ombudsperson to hear complaints, but almost all aspects pertaining to the Ombudsperson’s qualifications, functions and jurisdiction have been left to delegated legislation by the relevant state. Further, Clause 24 stipulates that the Ombudsperson can inquire into violations ‘in relation to healthcare services by any person…’. While this might include violations related to informed consent, it remains unclear whether the scope of the Ombudsperson’s powers will include complaints related to unlawful disclosure of information.

The Bill must be welcomed for introducing procedural safeguards in medical interventions related to HIV+ persons. However, many of its provisions, including the exceptions, suffer from overbreadth and vagueness. Furthermore, the absence of any penalty for breach of the provisions relating to informed consent and disclosure of information almost renders these safeguards futile.

Intermediary Liability Again: Google India vs. Visaka Industries

A Brief Background

In 2009, a defamation case was filed by Visaka Industries Ltd. (the ‘Company’) against a group called Ban Asbestos Network India (‘BANI’), its coordinator Mr. Gopal Krishna, and Google India. The Company is involved in the manufacture and sale of asbestos cement sheets and allied products. It alleged that some blogposts written by Mr. Gopal Krishna and posted on the blog owned by BANI were defamatory in nature. The blogposts contained scathing criticism of the Company for allegedly enjoying political patronage and profiting from products manufactured from asbestos. The Company also impleaded Google India as a party since the blog was hosted on Blogger, Google’s blog-publishing service.

In its petition before the metropolitan magistrate, the Company accused Google India of the following offences under the Indian Penal Code, 1860 (‘IPC’): (i) criminal conspiracy (Section 120-B IPC); (ii) defamation (Section 500 IPC); and (iii) publishing defamatory content (Section 501 read with Section 34 IPC). It was further alleged that Google India failed to remove the allegedly defamatory content despite it being brought to its notice.

While the case was pending before the metropolitan magistrate, Google India approached the Andhra Pradesh High Court (‘High Court’) under Section 482 of the Code of Criminal Procedure, 1973, praying for the quashing of all criminal charges levelled against it. Google India contended that it cannot be held liable for criminal defamation under the IPC as it is not the publisher of the allegedly defamatory content. Google India and Google Inc. are only intermediaries and service providers that act as a platform for end users to upload their content. Consequently, in view of Section 79 of the Information Technology Act, 2000 (‘IT Act’), intermediaries like Google India or Google Inc. cannot be held liable for defamation since they are neither authors nor publishers of such content.

The High Court, however, dismissed Google India’s petition through its order dated April 19, 2011. Referring to Section 79(3)(b) of the IT Act, the High Court held that Google India had failed to take any action to block or stop the dissemination of the objectionable material despite the Company issuing a notice and bringing the defamatory material to Google India’s knowledge. The High Court therefore refused to extend to Google India the exemptions available to intermediaries under either the un-amended or the amended Section 79 of the IT Act (the amendment took effect from October 27, 2009), and further refused to drop the defamation charges against Google India.

Aggrieved by the order of the High Court, Google India filed an appeal before the Supreme Court of India in 2011. Since then, the matter has been adjourned on several occasions and was most recently heard by a Supreme Court bench on November 24, 2016.

Hearing on November 24, 2016

The hearing commenced with Mr. Tushar Mehta, Additional Solicitor General of India appearing for the Union of India, mentioning the matter before a two-judge bench of the Supreme Court comprising Justice Dipak Mishra and Justice Amitava Roy. Mr. Mehta mentioned that on the last date of hearing, i.e., November 10, 2016, the Court had passed an order seeking the Attorney General’s assistance in the matter. However, since the Attorney General had a conflict of interest, having previously appeared for one of the parties, Mr. Mehta stated that he would be appearing on behalf of the Attorney General.

Mr. C.A. Sundaram, Senior Advocate appearing on behalf of Google India commenced his arguments by highlighting the following issues which he sought to address before the Court:

  1. What is the scope and extent of Section 79 of the IT Act vis-à-vis defamation cases?
  2. Is an intermediary a publisher for the purposes of Section 499 of IPC?
  3. At what stage should an intermediary remove content hosted by it? Should it remove the content pursuant to only a request made by a third party or should it take down content pursuant to an executive order or a court order?

Justice Dipak Mishra, while recapitulating the previous hearings, stated that the Court was of the view that an intermediary can be said to have knowledge of objectionable content through an order passed by a court or through a government notification. Keeping this view in mind, Justice Mishra reckoned that Google India should not be liable in the present case, since neither a court order nor a government notification had been passed in respect of the material and it therefore had no knowledge of it.

Mr. Sundaram further contended that the knowledge of an intermediary should be considered only in case of receipt of an order passed by a court of law and not in case of an executive order. Justice Mishra expressed his reservations regarding this contention. To advance his argument, Mr. Sundaram referred to Section 69A of the IT Act which confers powers on the Central Government to issue directions to any Government agency or intermediary to block any information for public access through any computer resource. As per the provision, the Central Government can do so on the grounds that “it is necessary or expedient so to do, in the interest of sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above”. Mr. Sundaram tried to draw a distinction between the grounds as mentioned in Section 69A and Article 19(2) of the Constitution of India which specifically provides for ‘defamation’ as a reasonable restriction to freedom of speech and expression. He contended that the executive does not have the power to issue orders for blocking of content under Section 69A of the IT Act on the ground of defamation.

He further argued that before issuing an order for blocking of content on the ground that such material is defamatory in nature, it is necessary to prove the same. According to him, such determination can only be made by a court of law. Hence, he argued that knowledge should be attributed to an intermediary only on the receipt of a judicial/court order and not a government notification or executive order.

After hearing Mr. Sundaram’s submissions on this point, Justice Mishra opined that there seemed to be some substance in the contention. Justice Mishra then asked Mr. K.V. Vishwanathan, Senior Advocate appearing for the Company, whether the government can decide if content is defamatory or not.

Mr. Vishwanathan submitted that the aspects of blocking, taking down of content and fixing liability of the intermediaries have different connotations. He further countered the argument previously made by Mr. Sundaram that Google Inc. and Google India are two separate entities. He referred to the definition of ‘intermediary’ as contained in Section 2(1)(w) of the IT Act which includes ‘search engines’. Hence, he contended that there should be no difference in treatment of Google Inc. and Google India for the purpose of the present case.

On the issue whether an intermediary can be treated as a publisher of the content, Mr Sundaram argued that an intermediary cannot be held to be a publisher of the content. However, if such intermediary fails to take any action despite having knowledge of such content through a takedown order, then it can be held to be the publisher of such content.

Mr. Vishwanathan contended that it is an internationally accepted position that an intermediary can be held to be liable as a publisher of defamatory material if it had the knowledge of such material.

As a concluding remark, Mr. Tushar Mehta stated that free speech is not an absolute right and is subject to the reasonable restrictions contained under Article 19(2). However, situations such as the present case merit judicial intervention to decide the contours of free speech.

The next date of hearing has been fixed for January 19, 2017.

NDTV INDIA BAN: A CASE OF REGULATORY OVERREACH AND INSIDIOUS CENSORSHIP?

In a highly contentious move, the Ministry of Information and Broadcasting (‘MIB’) issued an order banning the telecast of the Hindi news channel ‘NDTV India’ on 9th November, 2016. The MIB imposed this ‘token penalty’ on NDTV India following the recommendation of an Inter-Ministerial Committee (‘IMC’). The IMC had found the channel liable for revealing “strategically sensitive information” during the coverage of Pathankot terrorist attacks on 4th January, 2016. The ban has, however, been put on hold by the MIB after the Supreme Court agreed to hear a writ petition filed by NDTV India against the ban.

The order passed by the MIB raises important legal issues regarding the freedom of speech and expression of the press. Since news channels are constantly in a race to garner Television Rating Points, they may sometimes overlook the letter of the law while covering sensitive incidents such as terrorist attacks. In such cases, regulation of the media becomes necessary. However, it is tricky to strike an optimum balance between the various concerns at play here – the freedom of expression of the press and the people’s right to information on one hand, and public interest and national security on the other.

In this post, we discuss the background of the NDTV India case and the legal issues arising from it. We also analyze and highlight the effects of governmental regulation of the media and its impact on the freedom of speech and expression of the media.

NDTV Case – A Brief Background:

On January 29, 2016, the MIB had issued a show cause notice to NDTV India alleging that its coverage of the Pathankot military airbase attack had revealed vital information which could be used by terror operators to impede the counter-operations carried out by the security forces. The notice also provided details of the allegedly sensitive information revealed by NDTV India.

In its defence, the channel claimed that the coverage had been “balanced and responsible” and that it was committed to the highest levels of journalism. The channel also stated that the sensitive information allegedly revealed by the channel regarding critical defence assets and location of the terrorists was already available in the public domain at the time of reporting. It was also pointed out that other news channels which had reported on similar information had not been hauled up by the MIB.

However, the MIB, in its order dated November 2, 2016, held that NDTV India’s coverage contravened Rule 6(1)(p) of the Programme and Advertising Code (the ‘Programme Code’ or ‘Code’) issued under the Cable TV Network Rules, 1994 (‘Cable TV Rules’). In exercise of its powers under the Cable TV Networks (Regulation) Act, 1995 (‘Cable TV Act’) and the Guidelines for Uplinking of Television Channels from India, 2011, the MIB imposed a ‘token penalty’ of a one-day ban on the broadcast of the channel.

Rule 6(1)(p) of the Programme Code:

Rule 6 of the Code sets out the restrictions on the content of programmes and advertisements that can be broadcasted on cable TV. Rule 6(1)(p) and (q) were added recently. Rule 6(1)(p) was introduced after concerns were expressed regarding the real-time coverage of sensitive incidents like the Mumbai and Gurdaspur terror attacks by Indian media. It seeks to prevent disclosure of sensitive information during such live coverage that could act as possible information sources for terror operators.

Rule 6(1)(p) states that: “No programme should be carried in the cable service which contains live coverage of any anti-terrorist operation by security forces, wherein media coverage shall be restricted to periodic briefing by an officer designated by the appropriate Government, till such operation concludes.

Explanation: For the purposes of this clause, it is clarified that “anti-terrorist operation” means such operation undertaken to bring terrorists to justice, which includes all engagements involving justifiable use of force between security forces and terrorists.”

Rule 6(1)(p), though necessary to regulate overzealous media coverage especially during incidents like terrorist attacks, is vague and ambiguous in its phrasing. The term ‘live coverage’ has not been defined in the Cable TV Rules, which makes it difficult to assess its precise meaning and scope. It is unclear whether ‘live coverage’ means only live video feed of the operations or whether live updates through media reporting without visuals will also be considered ‘live coverage’.

Further, the explanation to Rule 6(1)(p) leaves a lot of room for subjective interpretation. It is unclear whether the expression “to bring terrorists to justice” implies that the counter-operations must result in the terrorists’ deaths, or whether the intention is also to cover the trial and conviction of terrorists caught alive. If the latter, it would be highly impractical to bar such coverage under Rule 6(1)(p). The inherent vagueness of this provision gives governmental authorities wide discretion to decide whether channels have violated the Code.

In this context, it is important to highlight that the Supreme Court struck down Section 66A of the Information Technology Act, 2000 in Shreya Singhal vs. Union of India on the ground that it was vague and overbroad. The Court held that the vague and imprecise nature of the provision had a chilling effect on the freedom of speech and expression. Following from this, it will be interesting to see the stand of the Supreme Court when it tests the constitutionality of Rule 6(1)(p) in light of the strict standards laid down in Shreya Singhal and a spate of other judgments.

Freedom of Speech under Article 19(1)(a)

The right of the media to report news is rooted in the fundamental right to free speech and expression guaranteed under Article 19(1)(a) of the Constitution of India. Every right has a corresponding duty, and accordingly, the right of the media to report news is accompanied by a duty to function responsibly while reporting information in the interest of the public. The freedom of the media is not absolute or unbridled, and reasonable restrictions can be placed on it under Article 19(2).

In the present case, it can be argued that Rule 6(1)(p) fails the scrutiny of Article 19(2) due to the inherent vagueness of the provision. However, the Supreme Court may be reluctant to deem the provision unconstitutional. This reluctance was demonstrated, for instance, when the Supreme Court dismissed a challenge to the constitutionality of the Cinematograph Act, 1952 and its attendant guidelines, brought on the ground that they contained vague restrictions in the context of certifying films. The Censor Board has used the wide discretion available to it to place unreasonable restrictions while certifying films. If the Supreme Court continues to allow such restrictions on the freedom of speech and expression, the Programme Code is likely to survive judicial scrutiny.

Who should regulate?

Another important issue that the Supreme Court should decide in the present case is whether the MIB had the power to impose such a ban on NDTV India. Under the current regulatory regime, there are no statutory bodies governing media infractions. However, there are self-regulatory bodies like the News Broadcasting Standards Authority (NBSA) and the Broadcasting Content Complaints Council (BCCC). The NBSA is an independent body set up by the News Broadcasters Association for regulating news and current affairs channels. The BCCC is a complaint redressal system established by the Indian Broadcasting Foundation for the non-news sector and is headed by retired judges of the Supreme Court and High Courts. Both the NBSA and the BCCC regularly look into complaints regarding violations of the Programme Code. These bodies are also authorised to issue advisories, censure channels, levy penalties and direct channels to be taken off air if found in contravention of the Programme Code.

The decision of the MIB was predicated on the recommendation of the IMC, which is composed solely of government officials with no journalistic or legal background. The MIB should have considered referring the matter to a regulatory body with domain expertise, like the NBSA, which addresses such matters on a regular basis, or should at least have sought its opinion before arriving at a decision.

Way Forward

Freedom of expression of the press and impartial, fair scrutiny of government actions and policies are imperative for a healthy democracy. Carte blanche powers for the government to regulate the media, as stipulated by the Cable TV Act, without judicial or other oversight mechanisms, pose a serious threat to free speech and the independence of the fourth estate.

The imposition of the ban on NDTV India by the MIB under vague and uncertain provisions can be argued to be a case of regulatory overreach and insidious censorship. Such executive intrusion into the freedom of the media will have a chilling effect on free speech, which can impair the vibrancy of public discourse and the free flow of information and ideas that sustain a democracy. Although the governmental decision has been stayed, the Supreme Court should intervene and clarify the import of the vague terms used in the Programme Code to ensure that the freedom of the press is not compromised and that fair and impartial news reporting is not stifled under the threat of executive action.

CCWG ploughs on with WS2: ICANN57

With 3141 participants in attendance, ICANN57 (held from 3-9 November 2016) was the largest public meeting in its history. It was also the first meeting to be held after the successful completion of the IANA Transition. The transition greenlit the enforcement of the provisions of the IANA Stewardship Transition Proposal, which consisted of two documents: the IANA Stewardship Transition Coordination Group (ICG) proposal and the Cross-Community Working Group on Enhancing ICANN Accountability (CCWG-Accountability) Work Stream 1 Report. Our previous posts analysing these recommendations can be found here.

The meeting week was preceded by a full day face-to-face meeting of the CCWG-Accountability on the 2nd of November. The group met to continue its discussion on Work Stream 2 (WS2), which officially kicked off during the previous meeting in Helsinki. Rapporteurs from many of the WS2 Drafting Teams and subgroups presented updates on the progress of work in the preceding months. This post captures some of the key updates.

Jurisdiction

ICANN’s incorporation and physical location in California has long been a source of contention for governments and other stakeholders. Jurisdiction directly impacts the manner in which ICANN and its accountability mechanisms are structured (for example, the sole designator model arises from the California Corporations Code). Greg Shatan, co-rapporteur of the Jurisdiction subgroup presented an update document on the progress of this group. While the current bylaws state that ICANN shall remain headquartered in California, stakeholders were interested to see whether the subgroup would look into the matter of relocation. It was stated during this meeting that the subgroup has determined that it will not be investigating the issue of changing ICANN’s headquarters or incorporation jurisdiction. However, should a problem yield no other solution in the future, this option will then be examined.

A substantial issue found to be within the scope of this subgroup’s mandate is that of “the influence of ICANN’s existing jurisdictions relating to resolution of disputes (i.e., “Choice of Law” and “Venue”) on the actual operation of policies and accountability mechanisms”. The group’s working draft analysis of this issue can be accessed here. Another mandate from Annex 12 of the WS1 report requires the subgroup to study the ‘multilayer jurisdiction issue’. This has been discussed in some detail in the draft document, which can be accessed here.

One of the concerns raised during the discussion was that the subgroup would not recommend any change and conclude in favour of the status quo. Reassurance was sought that this would not be the case. The rapporteur stated in response that one cannot predict the outcome of the group as there are no internal preconceptions. It was also pointed out that since the discussion ran the risk of being purely academic, it was important to get external opinions. Accordingly, it was agreed that a survey would be sent out to hear from registries, registrars, and others. Advice will also be sought from ICANN Legal.

Transparency

ICANN has often been criticised for a lack of transparency in its functioning. This has largely been attributed to its hybrid structure, which, it is argued, lacks the necessary active, passive and participatory transparency structures. WS1 of the CCWG-Accountability attempted to address some of these concerns; the inclusion of inspection rights is one such example. However, a significant part of the work has been left for WS2.

This subgroup has made significant progress and shared the first draft of its report, which can be read here. This document discusses the right to information, ICANN’s Documentary Information Disclosure Policy (DIDP), proactive disclosures, and ICANN’s whistleblower protection framework. A suggestion was made to include a requirement of transparency in Board deliberations, which will be considered by the subgroup. There was also some discussion on increasing the scope of proactive disclosures for greater transparency. Suggestions included disclosure of Board speaking fees and disclosure of contracts for amounts lower than $1 million (the current threshold for disclosure). There was also a discussion on ‘harm’ as an exception to disclosure, and the need to define it carefully. A revised draft of the report will be shared in the coming weeks, incorporating the points raised during this meeting.

Supporting Organisation (SO)/Advisory Committee (AC) Accountability

With the SOs and ACs being given greater powers under the Empowered Community, it is essential to ensure that they themselves do not remain unchecked. Accordingly, SO/AC reviews need to take place. This subgroup is tasked with the mandate of determining the most suitable manner of enhancing accountability. During this meeting, four identified tracks of activities were presented: (i) SO/AC effectiveness; (ii) evaluating the proposal of a ‘mutual accountability roundtable’; (iii) developing a detailed plan on how to increase SO/AC accountability; and (iv) assessing whether the Independent Review Process (IRP) should also apply to SO/AC activities.

Preliminary discussions have taken place on the first two tracks. It was decided that track 3 could not begin without some input from the SO/ACs. Accordingly, a list of questions was developed with the aim of better understanding the specific modalities of each organization. After a brief discussion, it was decided that this list would be sent to the SO/ACs.

Apart from these updates there was also a discussion on the Accountability and Transparency Review Team (ATRT) 3 and an interaction with the ICANN CEO.

ATRT3 and WS2:

During the Helsinki meeting, it was pointed out that the 3rd review of the Accountability and Transparency Review Team (ATRT3), scheduled to begin work in January, would have a significant overlap with WS2 topics (6 out of the 9 topics). After some discussion, it was decided that a letter would be sent to bring this to the attention of the ICANN Board. This letter also laid out possible ways to proceed:

  1. Option 1- ATRT3 and WS2 work in parallel, with a procedure to reconcile conflicting recommendations.
  2. Option 2- Delay ATRT3 until WS2 is completed.
  3. Option 3- Limit the scope of ATRT3 to assessing the implementation of ATRT2. ATRT4 can then make a full assessment of accountability and transparency issues before 2022 (preferred path).
  4. Option 4- ATRT3 continues with its full scope, with CCWG focusing only on the remaining issues. The ATRT recommendations could then be discussed by CCWG.

The Board’s response stated that while this was of concern, it was a decision to be made by the larger community, and brought it to the attention of the SOs and ACs. In Hyderabad it was decided that CCWG-Accountability will continue to follow up with the Board on this issue, while the SO/ACs deliberate internally as well.

Exchange with ICANN CEO

ICANN CEO Göran Marby’s meeting with CCWG-Accountability was arguably the most engaging session of the day. Central to this discussion was his recent announcement about a new office called the ICANN Complaints Officer. This person “will receive, investigate and respond to complaints about the ICANN organization’s effectiveness, and will be responsible for all complaints systems and mechanisms across the ICANN organization”. It was also stated that they would report to ICANN’s General Counsel. This last point was not well received by members of the CCWG-Accountability, who stressed the need for independence. It was pointed out that having the Complaints Officer report to the General Counsel creates a conflict of interest, as it is the legal team’s responsibility to protect ICANN. Though this was raised several times, Marby insisted that he did not think it was an issue, and asked that this be given a fair chance. This discussion was allotted extra time towards the end of the meeting, and there seemed to be a general agreement that the role and independence of the Complaints Officer needed greater thought and clarity. However, this remains the CEO’s decision, and any input provided by CCWG-Accountability will merely be advisory. It will be interesting to see whether he decides to take into account the strong concerns raised by this group.

The substantial discussions in WS2 are only just kicking off, with some subgroups (such as the Diversity subgroup) yet to begin their deliberations. The Transparency subgroup is making good progress with its draft document, on which CCWG-Accountability input is always welcome. It will be worth keeping an eye on the Jurisdiction subgroup, as this remains a divisive issue with political and national interests in the balance. Much remains to be done in the SO/AC Accountability subgroup, which is working to better understand the specific internal working of each SO/AC. This is an extremely important issue, especially in light of the new accountability structures created in WS1. CCWG-Accountability remains an open group that anyone interested can join as a participant or observer.


Evaluating the Risks of the Internet of Things

By Dhruv Somayajula[1]

Introduction

On 21st October 2016, multiple cyber-attacks on the internet infrastructure company Dyn shut down web browsing across America and Europe for hours. Over 100,000 devices infected with a malware strain named Mirai were reportedly marshalled into a botnet for this attack. The attack was a Distributed Denial of Service (DDoS) attack, which is carried out by flooding the bandwidth of a web server with artificial traffic from multiple devices, causing it to crash and rendering it inaccessible. This particular attack was carried out using a medley of internet-connected devices, including security and street-view cameras used for industrial security.
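To see why a flood of this kind takes a service offline, it helps to look at the arithmetic: once the combined legitimate and attack traffic exceeds what the server can process, most genuine requests are simply dropped. The short sketch below illustrates this with entirely hypothetical figures for server capacity and per-bot request rates; it is a back-of-the-envelope illustration of the mechanism, not a description of the actual Dyn attack.

```python
# A minimal, illustrative calculation of why a DDoS flood makes a service
# unavailable: once combined legitimate and attack traffic exceeds the
# server's capacity, most genuine requests are dropped or time out.
# All figures are hypothetical and chosen only to show the mechanism.

SERVER_CAPACITY_RPS = 10_000   # requests per second the server can handle
LEGITIMATE_RPS = 2_000         # normal traffic from real users
BOTNET_DEVICES = 100_000       # compromised devices (as reported for Mirai)
REQUESTS_PER_DEVICE = 5        # hypothetical requests each bot sends per second


def served_fraction(legit_rps: int, attack_rps: int, capacity_rps: int) -> float:
    """Fraction of legitimate requests still served, assuming the server
    picks requests indiscriminately from the combined incoming load."""
    total = legit_rps + attack_rps
    if total <= capacity_rps:
        return 1.0
    # Capacity is shared in proportion to each source's share of the traffic.
    return capacity_rps / total


attack_rps = BOTNET_DEVICES * REQUESTS_PER_DEVICE
fraction = served_fraction(LEGITIMATE_RPS, attack_rps, SERVER_CAPACITY_RPS)
print(f"Attack traffic: {attack_rps:,} requests/second")
print(f"Legitimate requests served: {fraction:.1%}")
# With these numbers only ~2% of genuine requests get through,
# so to real users the service appears to be down.
```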

The Dyn attack was another reminder to the global community of the potential dangers of unregulated devices connected over the internet, otherwise known as the ‘Internet of Things’ (IOT). This post, the first of a two-part series, examines the IOT framework, its practical applications and the risks associated with it. The second part will discuss the legal challenges that the IOT may create, the existing legal framework to deal with them, and the areas where change is required to accommodate the IOT.

What is the Internet of Things?

First coined by Kevin Ashton, the phrase ‘Internet of Things’ describes the network of devices connected via the internet promoting a smarter way of life. Any device with a function that connects it to the internet is a part of the IOT. These devices include smart home devices, cameras, wi-fi routers, television sets and smart cars.

A comprehensive definition of the ‘Internet of Things’ is offered by the International Telecommunication Union (ITU), which defines it as “a global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies.” The Indian Government, in a draft policy released last year, defined the ‘Internet of Things’ as “a seamless connected network of embedded objects/ devices, with identifiers, in which M2M communication without any human intervention is possible using standard and interoperable communication protocols.” This definition covers only a small subset of the IOT, since it refers exclusively to machine-to-machine (M2M) communication, that is, isolated device-to-device communication through embedded hardware and cellular or wired networks. In general, however, the IOT is a broader collective of devices, which also includes communication of data through wireless and cloud-based networks.

Uses and Applications of the Internet of Things

The IOT operates as a network of devices that share data among themselves to create convenience for people, by learning patterns of daily activity and acting on them. This convenience relates both to ease of living and to adding value to essential infrastructure.

There are many practical applications of IOT devices for consumers, including wearable devices, sensors for the quantification of personal data, and home automation. The use of smartwatches and trackable bands for fitness is an example of devices sharing data over the IOT. Quantified-self apps, which claim to track one’s heart rate, calories consumed and sleep cycles through sensors in order to keep track of one’s habits, are examples of sensor-based devices on IOT networks. Another growing category of devices for personal consumption is home automation, where light bulbs, thermostats and alarm clocks are connected to each other in a smart home.
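In a home-automation setup of this kind, each device periodically senses a value, serialises it, and transmits it to a hub or cloud service, which then acts on the aggregated data. The minimal sketch below (Python standard library only) illustrates that basic data flow; the hub address, endpoint and payload fields are hypothetical, and real smart-home devices commonly use protocols such as MQTT or Zigbee rather than plain HTTP.

```python
# A minimal sketch of an IOT sensor reporting readings to a local
# home-automation hub. The hub URL and payload format are hypothetical;
# the point is the sense -> serialise -> transmit cycle common to IOT devices.
import json
import random
import time
import urllib.request

HUB_ENDPOINT = "http://192.168.1.10:8080/sensors/living-room"  # hypothetical hub


def read_temperature() -> float:
    """Stand-in for an actual hardware sensor read."""
    return round(random.uniform(20.0, 26.0), 1)


def report_once() -> None:
    payload = json.dumps({
        "device_id": "thermostat-01",          # hypothetical device name
        "temperature_c": read_temperature(),
        "timestamp": int(time.time()),
    }).encode("utf-8")
    request = urllib.request.Request(
        HUB_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        print("Hub replied with status", response.status)


if __name__ == "__main__":
    # A real device would run this in a loop on a schedule;
    # a single report is enough to show the data flow.
    report_once()
```

Even this toy example shows how much intimate information such devices emit on a regular schedule (device identity, readings, timestamps of activity), which is precisely what makes the security and privacy risks discussed below significant.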

However, in addition to consumer-oriented uses, smart cities like Barcelona, Amsterdam and Singapore are using the IOT to improve road safety management, traffic diversion onto alternate routes, waste accumulation triggers and water management portals, using data accumulated from sensors. For example, the Autonomous Intersection Management project was designed to demonstrate how smart cars can avoid traffic congestion at intersections through the Internet of Things. The UN Broadband Commission for Sustainable Development also identifies specific IOT devices as useful for developing industries, including devices that can collect medical data to check for epidemics, measure water quality, enable remote access to irrigation pumps in farms and monitor wildlife.

Risks posed by increasing use of the IOT

The collection of data through the IOT creates databases that can be used to accurately predict actions. This accumulation of sensitive data (including the mapping of personal habits, geo-tracking, video recording on CCTVs and home electricity patterns) needs to be safeguarded against cyber-attacks and theft. The activity patterns of consumers can be mapped from the data collected to accurately predict a person’s activities, and this capability is susceptible to misuse in the wrong hands.

This is where the fundamental risks of the IOT lie – in the twin issues of security and privacy. The DDoS attack on Dyn last month was caused by an estimated 100,000 unsecured devices that malware turned into a flood of requests, causing the server to crash. Moreover, recent security breaches by online hacker groups using the IOT create a legitimate concern about the safety of IOT devices and a need to evaluate India’s level of preparedness for a possible attack. Breaches of IOT devices in the past have had disastrous consequences, such as a smart car being switched off remotely at a busy intersection, or baby cams being activated to spy on over 700 people. A huge number of devices, especially pre-2000s devices, have extremely low protection due to outdated standards and are vulnerable to cyber-attacks. The onus is on the industry to close the gap between the vulnerabilities of older devices and the global cyber security standards adopted by IOT devices.

Attacks such as the one on Dyn also raise questions about the safety of the data which a device collects for its application, and whether a person’s privacy can be breached by way of these cyber-attacks. A smart city monitoring roadways and controlling traffic, or an automated smart lock used for home security, can also potentially be breached by hackers or misused for surveillance purposes. These concerns will only grow with the increasing adoption of IOT devices. A secure IOT framework would need to include robust laws on security standards, data protection and privacy. The next post in this series will examine the legal framework for data protection with particular reference to the IOT in India and across the world, and evaluate how Indian laws can best accommodate the challenges thrown up by the rising use of online devices.

[1] Dhruv is a third year student at NALSAR University of Law, Hyderabad. Dhruv is currently interning at CCG.


Protecting Critical Information Infrastructures in India

Last month, around thirty-two lakh debit cards of various banks in India were compromised through a large-scale malware attack. As the biggest security breach ever experienced by the financial sector in India, this attack has also been described as the “first major successful attack on a critical information infrastructure in India”. Notably, the breach was not promptly identified despite governmental bodies like the Reserve Bank of India (RBI) and the Computer Emergency Response Team-India (CERT-In) having issued advisories to banks to secure their information infrastructures against cyber criminals. This incident highlights the ever increasing vulnerability of information infrastructures to cyber attacks. This post lays down the legal and institutional framework dealing with the protection of critical information infrastructures in India.

The financial sector is only one of the many sectors which are now critically reliant on information infrastructures. Information infrastructures, including computers, servers, storage devices, routers and other equipment, support the functioning of critical national capabilities such as power grids, emergency communications systems, e-governance and air traffic control networks, to name only a few. Such infrastructures are considered “critical” due to their contribution to the services delivered by the infrastructure providers, as well as on account of the potential impact of any sudden failure on the well-being and security of the nation.

These information infrastructures are especially vulnerable to cyber attacks and breaches. Firstly, critical information infrastructures (“CII” or “CIIs”) are deeply interconnected, complex by design and geographically dispersed. Secondly, dedicated weapons systems or armies are not necessary to disable these systems. Any delay or disruption in the functioning of these critical information systems can potentially spread to other CIIs, resulting in political, economic, social or national instability. The increasingly high dependence of critical sectors on CIIs, coupled with the wide variety of threats they are vulnerable to, necessitates an effective policy and institutional framework to protect CIIs.

“Protected Systems” under the IT Act

The Information Technology Act, 2000 (“IT Act”) provides the legislative basis for the protection of critical information infrastructure in India. Section 70 of the IT Act defines “critical information infrastructure” to be “the computer resource, the incapacitation or destruction of which, shall have debilitating impact on national security, economy, public health or safety”. Under this provision, any computer resource which directly or indirectly affects the facility of CII may be declared to be a “protected system” by the appropriate Government. Securing or attempting to secure unauthorized access to such protected systems is punishable. The Central Government has been vested with the authority to prescribe the information security practices and procedures for such protected systems.

Various computer resources have been notified as “protected systems” by the Central Government and State Governments. In 2010, the TETRA Secured Communication System Network and its hardware and software installed at various locations in New Delhi were notified as a “protected system” by the Central Government. In 2015, the Central Government notified the “Unique Identification Authority of India’s (UIDAI) Central Identities Data Repository facilities, information assets, logistics infrastructure and dependencies installed at various locations” as a protected system. More recently, the Central Government declared the Long Range Identification and Tracking (LRIT) system under the Ministry of Shipping, its facilities, information, assets, logistics infrastructure and dependencies to be a protected system. State Governments including Tamil Nadu, Chhattisgarh and Goa have also identified and declared different information infrastructures as protected systems. It is to be noted, however, that there is no exhaustive list of notified protected systems in the public domain. Further, the indiscriminate declaration of information infrastructures as protected systems, as done by various State Governments, is problematic. For instance, the “entire network of computer resources…including websites of the government and government undertakings” was declared to be a “protected system” by the Chhattisgarh Government. Firstly, such infrastructures do not “directly or indirectly affect the facility of a critical information infrastructure”, and secondly, a high quantum of punishment can be meted out for an attempt to secure access to such protected systems. In light of this, the declaration of infrastructures as “protected systems” needs to be a calibrated and considered process, and should be clarified by the Government.

Institutional Framework for Protection of CII

Under Section 70A(1) of the IT Act, the Central Government is vested with the power to designate an organization of the Government as the national nodal agency for the protection of CII. Towards this, in 2014, the Central Government notified the National Critical Information Infrastructure Protection Centre (NCIIPC), an organization under the National Technical Research Organization (NTRO), as the relevant nodal agency. Correspondingly, the Information Technology (National Critical Information Infrastructure Protection Centre and Manner of Performing Functions and Duties) Rules, 2013 (“NCIIPC Rules”) were also notified. Under the NCIIPC Rules, a “critical sector” has been defined to mean a sector which is critical to the nation and whose incapacitation or destruction will have a debilitating impact on national security, economy, public health or safety. On the NCIIPC website, these sectors have been classified into five main groups: (i) power and energy; (ii) banking, financial services and insurance (“BFSI”); (iii) ICTs; (iv) transportation; and (v) e-governance and strategic public enterprises. Unlike the critical sectors identified under the Strategic Approach of the Ministry of Electronics and Information Technology, the sectors identified by the NCIIPC do not include the defence sector. The defence sector has also been excluded from the NCIIPC’s purview under the NCIIPC Rules (Rule 3(4)).

While the Guidelines for the Protection of CII (Version 2.0) issued by the NCIIPC provide a basic framework for the protection of CII, it is both urgent and necessary to consultatively evolve sector-specific guidelines for the protection of these infrastructures. In this regard, while guidelines for the BFSI sector have been issued by agencies like the RBI and SEBI, critical sectors such as power and energy or transportation are yet to receive specific guidelines for the protection of their information infrastructures. It has also been argued that the effectiveness of the NCIIPC is undermined by its inaccessibility to the public. This criticism is bolstered, for instance, by the very limited information made available on the NCIIPC website. The opacity of the institutional framework can also prove to be a roadblock in the coordination of cybersecurity efforts, especially for effective public-private collaboration to protect CIIs. This is particularly important because of the large number of CIIs in the private sector. Further, standard operating procedures for the notification of CIIs and the identification of public-private partnerships are yet to be issued. No doubt, the notification of the NCIIPC as the nodal agency for the protection of CII has been a commendable step forward. However, much work remains to be done, and both the NCIIPC and the Government must proactively work with the private sector to ensure that our CIIs are secure and resilient against cyber attacks.