The Supreme Court’s Free Speech To-Do List

Written by members of the Civil Liberties team at CCG

The Supreme Court of India is often tasked with adjudicating disputes that shape the course of free speech in India. Here’s a roundup of some key cases currently before the Supreme Court.

Kamlesh Vaswani vs. Union of India

A PIL petition was filed in 2013 seeking a ban on pornography in India. The petition also prayed for a direction to the Union Government to “treat watching of porn videos and sharing as non-bailable and cognizable offence.”

During the course of the proceedings, the Department of Telecommunications (DoT) ordered ISPs to block over 800 websites allegedly hosting pornographic content, despite the freedom of expression and privacy concerns raised before the Supreme Court. The Government stated that the list of websites had been submitted to the DoT by the petitioner, and that the websites had been blocked without any independent verification. The ban was revoked after much criticism.

The case, currently pending before the Supreme Court, also has implications for the intermediary liability regime in India. Internet Service Providers may claim safe harbour from liability for content they host, as long as they satisfy certain due diligence requirements under Section 79 of the IT Act, read with the Information Technology (Intermediaries Guidelines) Rules, 2011. After the Supreme Court read down these provisions in Shreya Singhal v. Union of India, the primary obligation is to comply with court orders seeking takedown of content. The petition before the Supreme Court seeks to impose an additional obligation on ISPs to identify and block all pornographic content, or risk being held liable. Our work on this case can be found here.

Sabu Mathew George vs. Union of India

This is a 2008 case, where a writ petition was filed to ban ‘advertisements’ relating to pre-natal sex determination from search engines in India. Several orders have been passed, and the state has now created a nodal agency that will provide search engines with details of websites to block. The ‘doctrine of auto-block’ is an important consideration in this case – in one of its orders, the Court listed roughly 40 search terms and stated that the respondents should ensure that any attempt to look up these terms would be ‘auto-blocked’, which raises concerns about intermediary liability and free speech.

Currently, a note has been filed by the petitioner’s advocate, which states that search engines have the capacity to take down such content, but even upon intimation only end up taking down certain links and not others. Our work on this case can be found on the following links – 1, 2, 3.

Prajwala vs. Union of India

This is a 2015 case, where an NGO (named Prajwala) sent the Supreme Court a letter raising concerns about videos of sexual violence being distributed on the internet. The letter sought to bring attention to the existence of such videos, as well as their rampant circulation on online platforms.

Based on the contents of the letter, a suo motu petition was registered. Google, Facebook, WhatsApp, Yahoo and Microsoft were impleaded as parties. A committee was constituted to “assist and advise this Court on the feasibility of ensuring that videos depicting rape, gang rape and child pornography are not available for circulation”. The relevant order, which discusses the committee’s recommendations, can be found here. One of the committee’s stated objectives was to examine technological solutions to the problem – for instance, auto-blocking. This raises issues related to intermediary liability and free speech.

‘My Data, My Rules’ – The Right to Data Portability

By Aditya Singh Chawla

Nandan Nilekani has recently made news cautioning against ‘data colonization’ by heavyweights such as Facebook and Google. He laments that data, which is otherwise a non-rival, unlimited resource, is not being shared freely and is instead being put into silos. Not only does this limit its potential uses, but users also end up with very little control over their own data. He argues for ‘data democracy’ through a data protection law, particularly one that gives users greater privacy, control and choice. In specific terms, Nilekani appears to be referring to the ‘right to data portability’, a recently recognized concept in the data protection lexicon.

In the course of using online services, individuals typically provide an assortment of personal data to service providers. The right to data portability allows a user to receive their data back in a format that is conducive to reuse with another service. The purpose of data portability is to promote interoperability between systems and to give greater choice and control to the user with respect to their data held by other entities. The aim is also to create a level playing field for newly established service providers that wish to take on incumbents, but are unable to do so because of the significant barriers posed by lock-in and network effects. For instance, Apple Music users could switch to a rival service without losing playlists, play counts, or history; Amazon users could port purchasing history to a service that provides better recommendations; or eBay sellers could move to a preferable platform without losing their reputation and ratings. Users could also port to services with more privacy-friendly policies, thereby enabling an environment where services must also compete on such metrics.

The European Union’s General Data Protection Regulation (GDPR) is the first legal recognition of the right to data portability. Art. 20(1) defines the right as follows:

“The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format and have the right to transmit those data to another controller without hindrance from the controller to which the data have been provided”

Pursuant to this right, Art. 20(2) further confers the right to directly transmit personal data from one controller to another, wherever technically feasible.

The first aspect of the right to data portability allows data subjects to receive their personal data for private use. Crucially, the data must be in a format conducive to reuse. For instance, providing copies of emails in PDF format would not be sufficient. The second aspect is the ability to transfer data directly to another controller, without hindrance.
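To make the “structured, commonly used and machine-readable format” requirement concrete, here is a purely illustrative sketch in Python. The service, field names, and schema are invented for illustration; the GDPR mandates no particular format, but a JSON export of this kind would let another controller programmatically recover every field, unlike a flat PDF printout.

```python
import json

# Hypothetical export of a music-streaming user's data in a structured,
# machine-readable form. All names and fields here are invented examples.
export = {
    "user": "alice@example.com",
    "playlists": [
        {
            "name": "Running",
            "tracks": [
                {"title": "Song A", "artist": "Artist X", "play_count": 42},
                {"title": "Song B", "artist": "Artist Y", "play_count": 7},
            ],
        }
    ],
}

# Serialising to JSON keeps the data reusable: a rival service can parse
# the export and re-import playlists, play counts, and history losslessly.
blob = json.dumps(export, indent=2)
restored = json.loads(blob)
print(restored["playlists"][0]["tracks"][0]["title"])
```

A receiving service would simply parse the same structure on import, which is precisely the interoperability the right is meant to enable.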

There are certain prerequisites for the applicability of this right:

a) it applies only to personal data that the data subject ‘provided’ to the controller. This would include data explicitly provided (such as age, or address, etc., through online forms), as well as data generated and collected by the controller on account of the usage of the service. Data derived or inferred by the controller would not be within the scope of this right.

b) the processing must be pursuant to consent or a contract. Personal data processed for a task to be performed in public interest, or in the exercise of official authority is excluded.

c) the processing must be through automated means. Data in paper files would therefore not be portable.

d) the right must not adversely affect the rights and freedoms of others.

The GDPR does not come into force till May 2018, so there remain ambiguities regarding how the right to data portability may come to be implemented. For instance, there is debate about whether ‘observed data’, such as heartbeat tracking by wearables, would be portable. Even so, the right to data portability appears to be a step towards mitigating the influence data giants currently wield.

Data Portability is premised on the principle of informational self-determination, which forms the substance of the European Data Protection framework. This concept was famously articulated in what is known as the Census decision of the German Federal Constitutional Court in 1983. The Court ruled it to be a necessary condition for the free development of one’s personality, and also an essential element of a democratic society. The petitioners in India’s Aadhaar-PAN case also explicitly argued that informational self-determination was a facet of Art. 21 of the Indian Constitution.

Data portability may also be considered an evolution from previously recognized rights such as the right to access and the right to erasure of personal data, both of which are present in the current Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. TRAI’s recent consultation paper on Privacy, Security and Ownership of Data in the Telecom Sector also refers to data portability as a way to empower users. The right to data portability may be an essential aspect of a robust and modern data protection framework, and India is evidently not averse to taking cues from the EU in this regard. As we (finally) begin to formulate our own data protection law, it may serve us well to evaluate which concepts may be suitably imported.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

Google Faces Legal Hurdles Under Brazilian Internet Law

By Raissa Campagnaro[1]

The Brazilian Federal Prosecution Ministry has brought civil proceedings against Google for flouting its data protection law. The suit challenges Google’s access to the content of emails exchanged by Gmail users on multiple grounds, including Google’s failure to obtain express consent.

In October 2016, Brazil’s Federal Prosecutor filed a public civil suit against Google, claiming that the company had failed to comply with the country’s internet law, the Internet Bill of Rights. The suit argues that during a previous investigation, conducted through a civil inquiry, Google had disclosed that it scans the content of emails exchanged by Gmail users. According to the Federal Prosecutor, this violates Brazilian data protection standards.

The Internet Bill of Rights establishes data protection principles similar to those set up under the EU Data Protection Directive 95/46/EC. Under this law, any processing of data must be pursuant to express consent. The law specifically requires that the clause seeking consent be prominently displayed and easy to identify amongst other terms of the contract. The law also recognises a right to not have one’s data transferred to third parties without consent and a right to be informed about the specific purposes of the personal data collection, usage, storage, treatment and protection.

When asked about its compliance with the legislation, Google submitted that it analyses email messages to improve the user experience, by filtering messages for unwanted content, spam, or other kinds of malware. It also submitted that the scanning of messages is used to offer products and advertisements to the user and to classify emails into categories such as ‘social’ and ‘promotions’. Finally, Google contended that the scanning of emails is consented to by the user at the time of signing up, by agreeing to the privacy policy within Gmail’s terms of service.

However, the Federal Prosecution Ministry considers these practices to be ‘profiling’ – a consequence of personal data aggregation that allows the creation of users’ profiles based on their behaviour, online habits and preferences. These can be used to predict their future actions and decisions. Profiling is frequently used for behavioural advertisements in which aggregated personal data is transferred to other ISPs, who use it to direct ads, products and services determined by the person’s past online activity. According to the Federal Prosecutor, this not only violates people’s right to privacy, especially their informational self-determination right, but also interferes with a consumer’s freedom of choice.

Several scholars and researchers have also opposed profiling and behavioural advertising, arguing that it has severe negative consequences. These include (i) denial of credit or loan concessions; (ii) offering different health insurance deals based on a person’s medical history or the nature of activities they engage in; and (iii) offers with adaptive pricing, based on a variety of criteria that involve some level of discrimination. This is problematic because online profiles are limited. A person’s life is based on several aspects apart from the online information which is collected and aggregated. As a result, personal data aggregation, processing and analysis can lead to an incomplete or incorrect picture of an individual, leading to wrongful interventions in their life. Even if the profile is a complete reflection of a person’s life, the choice to have one’s data collected and used for determined purposes must always be the users’.

The suit alleges that Google’s practices are not in consonance with the legal requirement of seeking express consent, including through prominent display within a policy. It suggests that Google be required to take specific consent in order to access the content of emails.

The case also challenges the fact that Google’s privacy policy does not allow consumers to withdraw consent, which undermines consumers’ control over their data. Further, it is argued that consent should be sought afresh every time Google changes its privacy policy. The lack of clear and precise information about how data is processed is another issue raised in the case, as it violates Gmail users’ right to information regarding the usage of their data.

To substantiate its case, the Federal Prosecutor is relying on an Italian case in which Google’s data processing activities had been challenged. The ruling was based on Italy’s Data Privacy Code, which establishes data protection guarantees such as i) fair and lawful processing of data; ii) specific, explicit and legitimate purposes and use of data; iii) processing to not be excessive in relation to the purposes for which it is collected or subsequently processed; and iv) that the data must only be kept for the amount of time truly necessary. In addition, the law stipulates that a data subject must receive notice about how their data will be processed, allowing them to make an informed decision. Furthermore, the Italian code also requires consent to be express and documented in writing.

In 2014, the Garante (the Italian Data Privacy Authority, hereinafter “the Authority”) held that Google had failed to comply with certain requirements under the Italian legislation. Firstly, the information given by Google about how data processing was carried out was considered insufficient, as it was too general. Secondly, the consent format, given through the privacy policy agreement, was also held to be too broad. The Authority held that consent should be prior and specific to the data processing in question. Although the decision condemned the company’s practices, it did not establish any guidelines for Google to adopt in this regard.

Through the present suit, the Brazilian Federal Prosecutor seeks (i) suspension of Google’s email content analysis, that is, the scanning of emails of Gmail users where express consent has not been received; (ii) an obligation to obtain express consent from users before scanning or analysing the content of emails; and (iii) the possibility of consent withdrawal. The suit seeks an order directing Google to change its privacy policy to ensure consent is informed and specific to content analysis.

This case demonstrates a new aspect of data protection concern. Unlike the most common cases involving data breaches, where the damage is usually discovered too late or is too extensive to repair, the Brazilian and Italian cases are examples of proactive measures taken to minimise future risks. They also underline the importance of a legal framework that uses data protection principles to guarantee consumers’ right to privacy. It now appears that these rules are starting to be enforced more effectively and, in consequence, that the right to privacy can be observed in practice.

[1] Raissa is a law student from Brazil with an interest in internet law and policy. Raissa has been interning with the civil liberties team at CCG for the past month.

Pakistan, Sri Lanka, and Nepal get their own version of YouTube

Written by Nakul Nayak

In a significant development, Google announced yesterday that it has launched localized versions of its immensely popular video-sharing website YouTube in Pakistan, Nepal, and Sri Lanka. With this launch, users in these countries will access country-specific homepages. Moreover, the architecture of the site’s pages (and videos) will be tailored so that the YouTube experience includes the “most relevant videos” of a user’s country. It may be noted here that YouTube is already available in Nepali, Sinhalese, and Urdu.

The case of Pakistan is especially interesting because of YouTube’s frequent run-ins with the country’s administration over carrying blasphemous content. In fact, YouTube was banned in Pakistan in 2012 after the infamous film “Innocence of Muslims”, which was uploaded on and accessible through its site, created widespread public furore. The Supreme Court of Pakistan at that time insisted on the continuation of the ban until a method was found to block all blasphemous content. Even though YouTube is now localized for Pakistani content, reports indicate that the ban on access to the website persists. However, at least one report stated that users in different parts of Pakistan found “that the site was accessible under ‘https’ protocol.”

BytesforAll, an NGO based in Pakistan, had filed a case before the Lahore High Court in 2013, challenging the government’s blocking of YouTube. The case is still being heard. It is worth recalling that just last month, the European Court of Human Rights in Cengiz v. Turkey found Turkey’s blocking of YouTube to be violative of the right to receive and impart information. Unfortunately, the ECtHR judgment is available only in French. However, the official press release to the judgment stated that the Court

observed that YouTube was a single platform which enabled information of specific interest, particularly on political and social matters, to be broadcast. It was therefore an important source of communication and the blocking order precluded access to specific information which it was not possible to access by other means. Moreover, the platform permitted the emergence of citizen journalism which could impart political information not conveyed by traditional media.

Moving forward, it will be interesting to see how the Pakistani government reacts to YouTube’s move to a localised domain, language, and content, and whether it decides to unblock YouTube or continue its ban. Moreover, the battle in the YouTube case before the Lahore High Court may take a decisive turn, with the Court more open to trusting a localised website that caters to the needs and legal regulations of Pakistan.

Update: 19 January 2016

Reports state that Pakistan has officially lifted its ban on users’ access to YouTube. This major development comes in the wake of Google’s launching a localised version of YouTube, tailor-made for the Pakistani audience (discussed above). Note that the current ban on YouTube was imposed after the Supreme Court of Pakistan directed that all “offending material” should be blocked from the site. But in an update tendered to the Supreme Court on Saturday, Dawn newspaper reports that “it was not possible to block access to the ‘Innocence of Muslims’ clip — that caused an uproar in the Muslim world — without blocking the website’s IP address, which meant cutting all access to YouTube.”

Now, with a localised version of the site, the Government can ask YouTube to take down material and, as per the reported Government statement, YouTube would “accordingly restrict access”. However, at the same time, any content removal request to Google will be reviewed against its own Community Guidelines and content will be taken down only if it violates them. According to Reuters, Google said in a statement: “[w]e have clear community guidelines, and when videos violate those rules, we remove them … Where we have launched YouTube locally and we are notified that a video is illegal in that country, we may restrict access to it after a thorough review.”

In the coming weeks, it will be interesting to note the course of action taken by the Pakistani Government if a takedown request is not complied with by Google.

Can the EU beat Big Data and the NSA? An Overview of the Max Schrems saga

Written by Siddharth Manohar


The decision in the famous and controversial Schrems case (press release), delivered last month, has created confusion with respect to the rules applicable to companies transferring data out of the EU and into the USA. The case arose in light of Edward Snowden’s revelations regarding data handling by companies like Google and Facebook in the face of extensive acquisition of user information by US security agencies.

The matter came up before the Court of Justice of the European Union (CJEU) on referral from the High Court of Ireland. The case dealt with the permissibility and legality of a legal instrument known as the Safe Harbour Agreement. The Safe Harbour Agreement regulates transfer of data from the EU to US by internet companies. The effectiveness of this regulation was thrown into serious doubt following revelations by Edward Snowden regarding large scale surveillance carried out by USA state agencies, such as the NSA, by accessing users’ private data.

The agreement was negotiated between the US and the EU in 2000, and allowed American internet companies to transfer data from the European Economic Area to other countries without having to undertake the cumbersome task of complying with each individual EU country’s privacy laws. It contained a set of principles that legalized data transfer out of the EU by US companies which demonstrated adherence to a certain set of data handling policies. More than an enforceable standard to protect users’ data, it was a legal framework which served the purpose of giving the European Commission a basis to claim that data transfer to the USA was legal under European laws.

The Safe Harbour Agreement was meant to simplify compliance with the 1995 Data Protection Directive of the European Union, which laid down fundamental principles to be upheld in processing and handling of personal data. A 2000 decision of the European Commission held that the Safe Harbour Agreement ensured adequacy of data protection and privacy of data as required by this Directive, and came to be popularly known as the “Safe Harbour decision”. Since then, over 4,000 companies signed on to the Agreement in order to register themselves to legally export data out of the EU and into the USA.

After the Snowden leak however, it became clear that these principles were blatantly violated on a large scale. It was in this context that Maximilian Schrems, an Austrian law student, approached the Irish Data Protection authority complaining that US laws did not provide adequate protection to users’ private data against surveillance, as required by the Data Protection Directive. The Data Protection Authority dismissed the complaint, and Schrems then chose to appeal to the Irish High Court. The High Court, having heard the petition, chose to refer an important question to the CJEU: whether the 2000 EC decision, which upheld the Safe Harbour Agreement as satisfying the requirements of the EU Data Protection Directive, meant that national data protection authorities were prevented from taking up complaints against transfer of a person’s data as violating the Directive.

The CJEU answered emphatically in the negative, emphasising that a mere finding by the Commission of adequate data protection policy by an external country could not take away the powers of national data protection authorities. The national authority could therefore independently investigate privacy claims against a private US company handling an EU citizen’s data.

The CJEU also found that legislation authorising the interference of state authorities with the data handling of private companies had complete overriding effect over the provisions of the Safe Harbour Agreement. This was based on two-pronged reasoning: firstly, that the data acquired by state agencies was processed in ways above and beyond what was necessary for protecting national security; and secondly, that users whose data had been acquired by the authorities had no legal recourse to challenge such action or have that data erased. For these reasons, it ruled that the Safe Harbour Agreement failed the requirements of the EU Data Protection Directive.

This decision prompted a fair amount of deliberation about what now makes data transfer from the EU to the US legally valid, since its main legal basis had just been struck down. However, the Agreement was not the only legal basis for such transfers. Further, for data transfers to be held illegal, individual data handlers would now have to be challenged before national data protection authorities. The decision therefore does not pull the curtain down on all data transfer from the EU to the US; rather, the legal machinery of the Safe Harbour Agreement has rightly been found ineffective.

Therefore, while internet companies do not need to shut down their EU operations, they do need to review their data handling practices and the adherence of those practices to other available norms, such as the EU’s model clauses for data transfer to external countries. Some companies, such as Microsoft, have gone a step further and proposed solutions to fill the vacuum left by the Safe Harbour Agreement, as it does in this blog post by the head of its legal department.

That said, the EU has issued a statement that an agreement needs to be reached with US companies by January 2016, failing which it will consider stronger enforcement measures, such as coordinated action by each of the EU countries’ data protection authorities. The situation is still evolving, and this shake-up may well lead to better enforcement of privacy and data protection principles.

Freedom of Speech & Google Search- Preliminary Notes for India: Working Paper by Ujwala Uppaluri

As the Internet progressively becomes a key means by which information is communicated and exchanged, there is a growing need to examine how the applications that facilitate access to these troves of information operate.

Search engines have come to play a critical role in the digital information landscape. In India, search has been the subject of an investigation and, more recently, a fine by the Competition Commission of India. The question of what search engines may list in their results has also come up before the Supreme Court of India.


Google and other search engines have argued that their algorithms’ ranking of search results is an exercise in editorial discretion, protected as a First Amendment right available to all speakers. This has laid the groundwork for claims that search engines enjoy rights to freedom of speech. However, during the oral hearings in the landmark case of Shreya Singhal v. Union of India, the Supreme Court stated that intermediaries do not have free speech rights.

Against this backdrop, this paper briefly introduces comparative scholarship on search and the constitutional right to free speech. It takes the first steps toward making the argument for regulating important participants in the information landscape, such as search engines, and for constructing and clarifying Article 19(1)(a) frameworks so that rights adjudication relating to such regulation results in balanced outcomes.

The Complete Paper can be found here:

(Ujwala Uppaluri was a Fellow at CCG from June 2014 to April 2015 and will be joining Harvard Law School to pursue her LL.M. from August 2015.)