Facebook and its (dis)contents

By Aditya Singh Chawla

In 2016, Norwegian writer Tom Egeland uploaded a post on Facebook listing seven photographs that “changed the history of warfare”. The post featured the Pulitzer-winning image ‘The Terror of War’, which depicts a naked nine-year-old girl running from a napalm attack during the Vietnam War. Facebook deleted the post and suspended Egeland’s account.

A Norwegian newspaper, Aftenposten, while reporting on the suspension, used the same image on its Facebook page. The newspaper soon received a message from Facebook demanding that the image be either removed or pixelated. The editor-in-chief refused to comply in an open letter to Mark Zuckerberg, noting his concern at the immense power Facebook wielded over speech online. The issue escalated when several Norwegian politicians, including the Prime Minister, shared the image on Facebook and were temporarily suspended as well.

Facebook initially stated that it would be difficult to distinguish between instances in which a photograph of a nude child could be allowed and those in which it could not. However, after widespread censure, the platform eventually reinstated the image owing to its “status as an iconic image of historical importance.”

This incident brought to light the tricky position Facebook finds itself in as it attempts to police its platform. Facebook addresses illegal and inappropriate content through a mix of automated processes and human moderation. The company publishes guidelines, called its ‘Community Standards’, about what content is not appropriate for its platform. Users can ‘flag’ content that they think does not meet the Community Standards, which is then reviewed by moderators. Moderators may delete, ignore, or escalate flagged content to a senior manager. In some cases, the user’s account may be suspended, or the user may be asked to submit identity verification.
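Though Facebook’s actual systems are not public, the workflow described above can be pictured as a small review pipeline. The sketch below is purely illustrative; every name, threshold, and rule in it is an assumption for exposition, not Facebook’s implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    IGNORE = "ignore"
    DELETE = "delete"
    ESCALATE = "escalate"        # refer to a senior manager
    SUSPEND = "suspend_account"  # may be paired with identity verification

@dataclass
class FlaggedPost:
    post_id: str
    flag_reason: str         # category selected by the reporting user
    classifier_score: float  # confidence from an automated filter, 0 to 1

def review(post: FlaggedPost, moderator_confident: bool) -> Action:
    """Illustrative decision flow for one item in a moderation queue."""
    # Very high-confidence automated matches might be actioned
    # without waiting for human review.
    if post.classifier_score > 0.95:
        return Action.DELETE
    # Borderline content is escalated rather than decided at the front
    # line; this is exactly where context and culture cause disputes.
    if not moderator_confident:
        return Action.ESCALATE
    return Action.IGNORE
```

Every threshold and rule in such a pipeline is a policy choice, which is why critics press for the rules to be published and for a channel to appeal wrong outcomes.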

As evident from the ‘Terror of War’ incident, Facebook has often come under fire for wrongly moderating content, as well as for opacity in how its community review process is applied. It has been argued that content that evidently violates the Community Standards is often not taken down, while content that should be safe is censored. For instance, Facebook courted controversy again when it was accused of blocking content and accounts documenting the persecution of the Rohingya Muslim community in Myanmar.

Closer home as well, multiple instances of Facebook’s questionable moderation practices have come to light. In October 2017, Raya Sarkar, a law student based in the United States, created what came to be called ‘the List’, which named over 70 prominent academics who had been accused of sexual harassment. The approach proved extremely controversial, sparking debates about due process and the failure of institutional mechanisms to address harassment. Facebook’s decision to block her account for seven days proved equally contentious. Sarkar’s account was restored only after Facebook staff in Palo Alto were contacted directly. Similar instances of seemingly arbitrary application of the Community Standards have been reported; in many cases, accounts have been suspended and content blocked without notice, explanation, or recourse.

Content moderation inherently involves much scope for interpretation and disagreement. Factors such as context and cultural difference render it a highly subjective exercise. Algorithms do not appear to have reached sufficient levels of sophistication, and there are larger issues associated with automated censoring of speech. Human moderators are by all accounts burdened by the volume and the psychologically taxing nature of the work, and are therefore prone to error. The first step forward should therefore be to ensure that transparent mechanisms exist for recourse against the removal of legitimate speech.

Following the ‘Terror of War’ incident, Facebook updated its Community Standards. In a statement, it said that it would allow graphic material that is “newsworthy, significant, or important to the public interest — even if they might otherwise violate our standards.” The leak of its moderator guidelines in 2017 opened the company up to granular public critique of its policies. There is evidently scope for Facebook to be more responsive and consultative in how it regulates speech online.

In June 2017, Facebook reached 2 billion monthly users, making it the largest social network and a platform for digital interaction without precedent. It has announced plans to reach 5 billion users. With the influence it now wields, it must also embrace the responsibility to be more transparent and accountable to its users.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

The Supreme Court’s Free Speech To-Do List

Written by members of the Civil Liberties team at CCG

The Supreme Court of India is often tasked with adjudicating disputes that shape the course of free speech in India. Here’s a round-up of some key cases currently before the Court.

Kamlesh Vaswani vs. Union of India

A public interest litigation (PIL) petition was filed in 2013 seeking a ban on pornography in India. The petition also prayed for a direction to the Union Government to “treat watching of porn videos and sharing as non-bailable and cognizable offence.”

During the course of the proceedings, the Department of Telecommunications (DoT) ordered ISPs to block over 800 websites allegedly hosting pornographic content, despite the freedom of expression and privacy concerns raised before the Supreme Court. The Government stated that the list of websites had been submitted to the DoT by the petitioner, and the websites were blocked without any verification. The ban was revoked after much criticism.

The case, currently pending before the Supreme Court, also has implications for the intermediary liability regime in India. Internet Service Providers may claim safe harbour from liability for content they host, as long as they satisfy certain due diligence requirements under Sec. 79 of the IT Act, read with the Information Technology (Intermediaries Guidelines) Rules, 2011. After the Supreme Court read down these provisions in Shreya Singhal v. Union of India, the primary obligation is to comply with court orders seeking takedown of content. The petition before the Supreme Court seeks to impose an additional obligation on ISPs to identify and block all pornographic content, or risk being held liable. Our work on this case can be found here.

Sabu Mathew George vs. Union of India

This is a 2008 case in which a writ petition was filed to ban ‘advertisements’ relating to pre-natal sex determination from search engines in India. Several orders have been passed, and the state has now created a nodal agency to provide search engines with details of websites to block. The ‘doctrine of auto-block’ is an important consideration in this case: in one of its orders, the Court listed roughly 40 search terms and stated that the respondents should ensure that any attempt to look up these terms would be ‘auto-blocked’, which raises concerns about intermediary liability and free speech.
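To see why the direction is contentious, consider the crudest way an ‘auto-block’ could be implemented: a keyword filter applied to every incoming query. The sketch below is a hypothetical illustration, not how any search engine actually works, and the term list is a placeholder for the roughly 40 terms in the order:

```python
# Placeholders standing in for the search terms listed in the order.
BLOCKED_TERMS = {"blocked term one", "blocked term two"}

def is_auto_blocked(query: str) -> bool:
    q = query.lower()
    # Substring matching blocks every query containing a listed term,
    # regardless of the searcher's intent.
    return any(term in q for term in BLOCKED_TERMS)

def handle_query(query: str) -> list[str]:
    if is_auto_blocked(query):
        return []  # the query is refused outright
    return ["...results..."]  # placeholder for a real search backend
```

A filter of this kind cannot distinguish an illegal advertisement from a news report about the ban or a researcher’s query, which is the over-blocking worry that runs through the case.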

Currently, a note has been filed by the petitioner’s advocate, which states that search engines have the capacity to take down such content, yet even upon intimation they only end up taking down certain links and not others. Our work on this case can be found on the following links – 1, 2, 3.

Prajwala vs. Union of India

This is a 2015 case in which an NGO, Prajwala, sent the Supreme Court a letter raising concerns about videos of sexual violence being distributed on the internet. The letter sought to bring attention to the existence of such videos, as well as their rampant circulation on online platforms.

Based on the contents of the letter, a suo motu petition was registered, and Google, Facebook, WhatsApp, Yahoo and Microsoft were impleaded as parties. A committee was constituted to “assist and advise this Court on the feasibility of ensuring that videos depicting rape, gang rape and child pornography are not available for circulation”. The relevant order, which discusses the committee’s recommendations, can be found here. One of the committee’s stated objectives was to examine technological solutions to the problem, for instance auto-blocking, which raises issues related to intermediary liability and free speech.
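The technological solution most often discussed in this context is hash-matching: a platform keeps a database of fingerprints of known illegal videos and checks every upload against it. Deployed systems use robust perceptual hashes that survive re-encoding (Microsoft’s PhotoDNA is the best-known example); the sketch below uses a plain cryptographic hash purely for illustration, so it would only catch byte-identical copies:

```python
import hashlib

# Fingerprints of files already verified as prohibited content; in
# practice such a list would be maintained by a designated agency.
KNOWN_BAD_HASHES = {"placeholder-hash-1", "placeholder-hash-2"}

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large videos need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def should_block_upload(path: str) -> bool:
    # An exact hash match only catches identical files; any re-encoding
    # defeats it, which is why real systems rely on perceptual hashing.
    return sha256_of(path) in KNOWN_BAD_HASHES
```

Even with better hashes, someone must decide what enters the database, which is where the intermediary liability and free speech questions return.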

‘My Data, My Rules’ – The Right to Data Portability

By Aditya Singh Chawla

Nandan Nilekani has recently made news cautioning against ‘data colonization’ by heavyweights such as Facebook and Google. He laments that data, which is otherwise a non-rival, unlimited resource, is not being shared freely, and is instead being put into silos. Not only does this limit its potential uses, it also leaves users with very little control over their own data. He argues for ‘data democracy’ through a data protection law, particularly one that gives users greater privacy, control and choice. In specific terms, Nilekani appears to be referring to the ‘right to data portability’, a recently recognized concept in the data protection lexicon.

In the course of using online services, individuals typically provide an assortment of personal data to service providers. The right to data portability allows a user to receive their data back in a format that is conducive to reuse with another service. The purpose of data portability is to promote interoperability between systems and to give the user greater choice and control over their data held by other entities. The aim is also to create a level playing field for newly established service providers that wish to take on incumbents, but are unable to do so because of the significant barriers posed by lock-in and network effects. For instance, Apple Music users could switch to a rival service without losing playlists, play counts, or history; Amazon users could port their purchasing history to a service that provides better recommendations; and eBay sellers could move to a preferable platform without losing their reputation and ratings. Users could also port their data to services with more privacy-friendly policies, thereby enabling an environment where services must also compete on such metrics.

The European Union’s General Data Protection Regulation (GDPR) is the first legal recognition of the right to data portability. Art. 20(1) defines the right as follows:

“The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format and have the right to transmit those data to another controller without hindrance from the controller to which the data have been provided”.

Pursuant to this right, Art. 20(2) further confers the right to directly transmit personal data from one controller to another, wherever technically feasible.

The first aspect of the right to data portability allows data subjects to receive their personal data for private use. Crucially, the data must be in a format conducive to reuse. For instance, providing copies of emails in PDF format would not be sufficient. The second aspect is the ability to transfer data directly to another controller, without hindrance.
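What counts as a reusable format is easiest to see by example. A structured export of, say, a music service’s playlists might look like the hypothetical sketch below (all field names are invented for illustration); another service could parse and re-import it directly, which a PDF rendering of the same information would not allow:

```python
import json

# A hypothetical, machine-readable export of a user's playlists.
playlist_export = {
    "user": "alice@example.com",
    "playlists": [
        {
            "name": "Running",
            "tracks": [
                {"title": "Song A", "artist": "Artist X", "play_count": 42},
                {"title": "Song B", "artist": "Artist Y", "play_count": 7},
            ],
        }
    ],
}

# Serialising to JSON yields a structured, commonly used format in the
# sense of Art. 20(1); a receiving service can load it with json.loads().
print(json.dumps(playlist_export, indent=2))
```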

There are certain prerequisites for the applicability of this right:

a) it applies only to personal data that the data subject ‘provided’ to the controller. This includes data explicitly provided (such as age or address submitted through online forms), as well as data generated and collected by the controller on account of the usage of the service. Data derived or inferred by the controller falls outside the scope of this right.

b) the processing must be pursuant to consent or a contract. Personal data processed for a task carried out in the public interest, or in the exercise of official authority, is excluded.

c) the processing must be through automated means. Data in paper files would therefore not be portable.

d) the right must not adversely affect the rights and freedoms of others.

The GDPR does not come into force until May 2018, so ambiguities remain regarding how the right to data portability will be implemented. For instance, there is debate about whether ‘observed data’, such as heartbeat tracking by wearables, would be portable. Even so, the right to data portability appears to be a step towards mitigating the influence data giants currently wield.

Data portability is premised on the principle of informational self-determination, which forms the substance of the European data protection framework. This concept was famously articulated in what is known as the Census decision of the German Federal Constitutional Court in 1983. The Court ruled it to be a necessary condition for the free development of one’s personality, and an essential element of a democratic society. The petitioners in India’s Aadhaar-PAN case also explicitly argued that informational self-determination is a facet of Art. 21 of the Indian Constitution.

Data portability may also be considered an evolution from previously recognized rights such as the right to access and the right to erasure of personal data, both of which are present in the current Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011. TRAI’s recent consultation paper on Privacy, Security and Ownership of Data in the Telecom Sector also refers to data portability as a way to empower users. The right to data portability may be an essential aspect of a robust and modern data protection framework, and India is evidently not averse to taking cues from the EU in this regard. As we (finally) begin to formulate our own data protection law, it may serve us well to evaluate which concepts may be suitably imported.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi

TRAI releases Regulations enforcing Net Neutrality, prohibits Differential Pricing

Written by Siddharth Manohar

The Telecom Regulatory Authority of India (TRAI) has come out with a set of regulations explicitly prohibiting differential pricing for data services in India. The operative provision reads:

3. Prohibition of discriminatory tariffs.— (1) No service provider shall offer or charge discriminatory tariffs for data services on the basis of content.

(2) No service provider shall enter into any arrangement, agreement or contract, by whatever name called, with any person, natural or legal, that has the effect of discriminatory tariffs for data services being offered or charged to the consumer on the basis of content.

TRAI recently concluded a public consultation process regarding differential pricing in data services. The consultation paper covered all differently priced or zero-rated services offered over data. The process witnessed tremendous public participation, with a spirited campaign by internet activists (Savetheinternet.in) and a counter-campaign by Facebook (https://www.facebook.com/savefreebasics), which sought to garner support from users through the narrative of connecting those who have no access.

CCG submitted a formal response as part of this process, which you can read here, and filed an additional counter-comment signed by ten different civil society and research organizations.

The consultation process also involved a public discussion on the questions raised, where the usual suspects were all present: telecom companies arguing for differential pricing, and internet activists against it. Also present were startup and user representatives.

Facebook’s telecom partner for carrying the Free Basics platform in India, Reliance Communications, was earlier instructed by TRAI to put the rollout of Free Basics on hold until the regulator came to a clear position on differential pricing and net neutrality. The regulator later confirmed that it received a compliance report to this effect. Facebook aggressively pursued its campaign to collect support in favour of its platform for the entire duration of the public consultation.

TRAI has clarified that these regulations may be reviewed after a two-year period, or earlier if the Authority so decides. An exception to the prohibition has been included to account for emergency services and services offered during ‘times of grave public emergency’. A further exception covers closed networks that charge a special tariff for their usage.

[We will shortly update the piece with more analysis of the regulations] 

A Constitutional Right against Free Basics? The Link between Article 19 and Zero Rating

Written by Siddharth Manohar

The past month has witnessed a rising tide of public debate surrounding net neutrality once more, accompanying the release of another consultation paper by TRAI and another AIB video urging public participation in the ongoing consultation process. To add to this mix, there has also been an effort by Facebook to build consensus amongst its userbase regarding the effect of ‘Free Basics’ on net neutrality. One set of arguments in these debates centres on the harm that a differentially priced platform can cause to competition in the market for internet applications, along with the related concern of monopolization of a section of the country’s userbase. The other side places emphasis on the need to increase the accessibility of the internet, and the two sides also disagree on the interpretation of the term ‘net neutrality’.

An important issue that gets missed in the rhetoric is the fundamental right of internet users to access a diverse set of media sources on any platform that is in the nature of a public utility. Media diversity requires that the information stream reaching the public through any public medium be protected from undue influence by one or a few entities with a controlling effect on the market for media content. It also rules out any role for the carriers of content (usually known as intermediaries or service providers) in choosing whose content, or what kind of content, is allowed on the medium. The usage and allocation of the medium as a public resource is subject to certain constitutional principles as well, and these too are ignored in discussions of how to regulate (or not regulate) Internet-related services in India.

The Right to be Informed

Article 19 of the Constitution guarantees the right to freedom of expression, and this right includes the right of citizens to a plural media. As discussed by the Supreme Court in Secretary, Ministry of Information & Broadcasting, Govt. of India v. Cricket Association of Bengal, the debate and opinions sought to be protected by Article 19 need to be informed by a plurality of views and an ‘aware citizenry’. What does this mean for regulation of access to the Internet? It translates into ensuring that a wide array of media consumption choices is available to the public. Control over any communication platform cannot remain concentrated in one or a few parties; such concentration restricts the nature of the content available through that medium, narrowing the ideas and views available to citizens on any public platform.

It is far from difficult to balance this concern with the free market. The principle encourages a competitive atmosphere between content providers, and seeks to avoid a situation where a disproportionately dominant player exerts undue influence over the functioning of the market. The presence of one or a few dominant entities with a magnified impact on the market makes it difficult for newer entrants to make a dent in the market share of the dominant players, reducing the possibility of any competition from these smaller players.

This constitutional requirement conflicts with the concept of zero-rated plans at its core: can we really have a telecom company deciding the exact pieces of content that we receive in preference to all other content? Are we willing to hand it the power of shaping consumer choice, public access and opinion simply by choosing the right business partners? If we can conclusively answer these questions in the affirmative, zero-rating plans would have no quarrel with Article 19. Indeed, such an affirmation would even dispense with one of the core tenets of the idea of net neutrality: that all data be treated in the same manner irrespective of its content.

Spectrum as a Public Resource

The Cricket Association of Bengal judgment also discusses the regulation of spectrum as a public resource. This is arguably an even more fundamental issue, addressing the question of what qualifies as legitimate usage and allocation of spectrum. The Court characterized airwaves as a scarce public resource, which ought to be used in the best interests of the public and in a manner that prevents any infraction of their rights. Justice Reddy’s opinion in the judgment even acknowledges the requirement of media plurality as part of the policy approach required for regulating spectrum.

Another SC judgment in a similar vein, Association of Unified Tele Services Providers & Ors. v. Union of India & Ors., ruled that the State is bound to use spectrum resources solely for the enjoyment of the general public. Applying the public trust doctrine, it explained that such resources may not be used or transferred for any kind of private or commercial interest.

What the available jurisprudence effectively lays down can be encapsulated as follows: spectrum is a public resource that can only be used and/or allocated by the State for general public benefit, and cannot be used in any manner for private or commercial interests. This public interest encompasses various concerns, one of them being the right to a diverse set of media content sources, so that interested parties do not acquire any kind of power or control over the content available to consumers. For the State, this means that spectrum must be used to maximise the variety of media available to end users, and the medium of transmission must not be controlled by one or a few players.

This creates a tricky situation for TRAI, which has asked for public comments on the desirability of differential pricing in data services. There is a glaring lack of clarity on the exact mandate given to the State regarding how to use spectrum resources to achieve TRAI’s officially cited objective of providing ‘free’ Internet access to consumers. Without discussion focused on the exact nature of what we want to achieve, we will continue to be forced to take reactionary positions on most issues and developments. Forming a concrete policy to connect India’s billion can only get easier once we agree upon a common goal and a set of principles for how to get there.


Can the EU beat Big Data and the NSA? An Overview of the Max Schrems saga

Written by Siddharth Manohar


The decision in the famous and controversial Schrems case (press release), delivered last month, has created confusion about the rules applicable to companies transporting data out of the EU and into the USA. The case arose in light of Edward Snowden’s revelations regarding the handling of data by companies like Google and Facebook in the face of extensive acquisition of user information by US security agencies.

The matter came up before the Court of Justice of the European Union (CJEU) on referral from the High Court of Ireland. The case dealt with the permissibility and legality of a legal instrument known as the Safe Harbour Agreement, which regulates the transfer of data from the EU to the US by internet companies. The effectiveness of this regulation was thrown into serious doubt following Edward Snowden’s revelations of large-scale surveillance carried out by US state agencies, such as the NSA, by accessing users’ private data.

The agreement was negotiated between the US and the EU in 2000, and allowed American internet companies to transfer data from the European Economic Area to other countries without having to undertake the cumbersome task of complying with each individual EU country’s privacy laws. It contained a set of principles that legalized data transfer out of the EU by US companies that demonstrated adherence to a certain set of data handling policies. More than an enforceable standard to protect users’ data, it was a legal framework that gave the European Commission a basis to claim that data transfer to the USA was legal under European law.

The Safe Harbour Agreement was meant to simplify compliance with the 1995 Data Protection Directive of the European Union, which laid down fundamental principles to be upheld in the processing and handling of personal data. A 2000 decision of the European Commission held that the Safe Harbour Agreement ensured the adequacy of data protection and privacy of data as required by this Directive, and came to be popularly known as the “Safe Harbour decision”. More than 4,000 companies subsequently signed on to the Agreement in order to register themselves to legally export data out of the EU to the USA.

After the Snowden leaks, however, it became clear that these principles were being blatantly violated on a large scale. It was in this context that Maximilian Schrems, an Austrian law student, approached the Irish data protection authority, complaining that US law did not provide adequate protection to users’ private data against surveillance, as required by the Data Protection Directive. The authority dismissed the complaint, and Schrems appealed to the Irish High Court. The High Court, having heard the petition, referred an important question to the CJEU: whether the 2000 EC decision, which upheld the Safe Harbour Agreement as satisfying the requirements of the EU Data Protection Directive, prevented national data protection authorities from taking up complaints that the transfer of a person’s data violated the Directive.

The CJEU answered emphatically in the negative, emphasising that a mere finding by the Commission that an external country’s data protection regime is adequate could not take away the powers of national data protection authorities. A national authority can therefore independently investigate privacy claims against a private US company handling an EU citizen’s data.

The CJEU also found that legislation authorising state authorities to interfere with the data handled by private companies had complete overriding effect over the provisions of the Safe Harbour Agreement. This was based on two-pronged reasoning: first, the data acquired by state agencies was processed in ways above and beyond what was necessary for protecting national security; second, users whose data had been acquired by the authorities had no legal recourse to challenge such action or have that data erased. For these reasons, the Court ruled that the Safe Harbour Agreement failed the requirements of the EU Data Protection Directive.

This decision prompted a fair amount of deliberation about what now makes data transfer from the EU to the US legally valid, since its main legal basis had just been struck down. The interesting point to note, however, is that the Agreement was not the only legal basis for such transfers. Further, for a given transfer to be held illegal, the individual data handler would now have to be challenged before a national data protection authority. The decision thus does not pull the curtain down on all data transfer from the EU to the US; rather, the legal machinery of the Safe Harbour Agreement has rightly been found ineffective.

Therefore, while internet companies do not need to shut down their operations in the EU, they do need to review their data handling practices and the adherence of those practices to other available norms, such as the EU’s model clauses for data transfer to external countries. Some companies, such as Microsoft, have gone a step further and proposed solutions to fill the vacuum left by the Safe Harbour Agreement, as set out in this blog post by the head of its legal department.

That said, the EU has stated that an agreement needs to be reached with US companies by January 2016, failing which it will consider stronger enforcement measures, such as coordinated action by the data protection authorities of the EU countries. The scenario is still evolving, and this shake-up could well lead to better-enforced privacy and data protection principles.