Google de-platforms Taliban Android app: Speech and Competition implications?

Written by Siddharth Manohar

A few weeks ago, Google pulled from its online application marketplace, the Google Play Store, an app developed by the Taliban to propagate violently extremist views and spread hateful content. Google has stated that it did so because the app violated its Play Store policy.

Google maintains a comprehensive policy statement for any developer who wishes to upload an app for public consumption on the Play Store. Apart from setting out rules for the Play Store as a marketplace, the policy also places certain substantive conditions on developers using the platform to reach users.

Amongst other restrictions, one head reads ‘Hate Speech’. It says:

We don’t allow the promotion of hatred toward groups of people based on their race or ethnic origin, religion, disability, gender, age, veteran status, or sexual orientation/gender identity.

Google found the Taliban app to violate this stipulation in the Play Store policy, as confirmed by a Google spokesperson, who said that the policies are “designed to provide a great experience for users and developers. That’s why we remove apps from Google Play that violate those policies.” The app was first detected by an online intelligence group which claims to monitor extremist content on social media. It was developed to increase access to the Taliban’s online presence by presenting content in the Pashto language, which is widely spoken in the Afghan region.

The application itself, of course, remains available for download on a number of other websites; it was the content of its material that led to its removal from one marketplace. This is an interesting application of the restriction of hateful speech, because the underlying principle in Google's policy itself acknowledges that the development and sale of apps is a form of speech.

A potentially interesting debate in this area is the extent to which the contours of permissible speech can be decided by a private entity on its public platform. The age-old debate about permissible restrictions on speech finds expression in this particular "marketplace of ideas", the Google Play Store. On one hand, there is the concern of protecting users from harmful and hateful content: speech that targets and vilifies individuals based on some aspect of their identity, be it race, gender, caste, colour, or sexual orientation. On the other hand, there will always be the concern that monitoring of speech by the overseeing authority becomes excessive and censors certain kinds of opinions and perspectives from entering the mainstream.

This particular situation provides an easy example in the form of an application developed by an expressly terrorist organisation. It would however still be useful to keep an eye out in the future for the kind of applications that are brought under the ambit of such policies, and the principles justifying these policies.

The question of what control, if any, can be exercised over this kind of editorial power that Google holds over its marketplace is also a relevant one. Google can no doubt justify its editorial powers in relatively simple terms: it has explicit ownership of the entire platform and can decide the basis on which to allow developers onto it. However, the Play Store accounts for an overwhelmingly large share of how users access applications on a daily basis. Google's Play Store policies therefore have a significant impact on how, and whether, applications reach users across the entire marketplace of applications and users. The policy implication is that the principles underlying Google's Play Store policies need to be scrutinised for their impact on the entire app development ecosystem. This is evidenced by the fact that the European Commission about a year ago pulled up Google for competition concerns regarding its Android operating system, and has also recently communicated its list of objections to Google. The variety of speech and competition concerns applicable to this context makes it one to watch closely for further developments.


Image Source: ‘mammela’, Pixabay.

Delhi District Court finds Voyeurism as a violation of the Right to Privacy

Written by Siddharth Manohar

A Delhi district court last week dealt with a case of voyeurism, sentencing the accused to a year's simple imprisonment along with a fine of ten thousand rupees. A point of interest was the characterisation of the offence of voyeurism under Section 354C of the Indian Penal Code in terms of privacy, in the latter part of the judgment. Authored by Judge Susheel Bala Dagar, the portion in question reads:

“Voyeurism is a ridiculous form of enjoyment for men but a mental torture for women. Men who indulge in such enjoyment do not seem to realize that they are infringing on the fundamental right to privacy of her body of the woman. Due to such offenders the women do not feel safe inside such places where she would usually expect not to be observed.”

The offence of voyeurism under the section is defined as "watching, or capturing the image of, a woman engaging in a private act in circumstances where she would usually have the expectation of not being observed either by the perpetrator or by any other person". The application of the right to privacy here employs the conventional understanding of "the right to be let alone". The Indian Supreme Court has recognised the right to privacy of the person as the right to be let alone, reading it into Article 21 of the Constitution, which guarantees the right to life, in the famed Auto Shankar case.

This conception of the right to privacy was famously expounded in one of the first pieces of academic writing to argue for the existence of such a right. Simply titled 'The Right to Privacy', the article was written by Samuel Warren and Louis Brandeis, the latter of whom would go on to become a justice of the Supreme Court of the USA. This conception of the right was later transformed into a legal principle through its application in American tort law. It encapsulates the relevant aspect of the right as 'intrusion upon seclusion', which is said to occur when a person "intentionally intrudes, physically or otherwise, upon the solitude or seclusion of another or his private affairs or concerns".

The case in question dealt with a 19-year-old male peeping into a lavatory while it was in use by a woman. Along with the conception of privacy as described by Warren and Brandeis, the judge also mentions the aspect of being in a place "where she would usually not expect to be observed". Indeed, even if a potential victim occupies a public place, later judgments such as Nader v. General Motors Corporation have made clear that a person does not automatically make public everything they might do in a public space. The point is, however, moot here, as the facts involve circumstances that are clearly more grave and fall squarely within the mischief of the section as described in the Penal Code.

An encouraging thread observable in the District Court judgment was the focus on the actions of the defendant, as opposed to those of the complainant, as is the case in so many analyses of crimes against women. The judge does make it a point to question why the defendant showed no reaction if the unprovoked slap he alleged the complainant gave him had in fact occurred. This apart, she also points out that the objective of the punishment in this case is not merely to make an example of the accused for the sake of deterrence. The larger objective is to reduce the number of crimes committed against women, and reformative action forms part of this agenda. To once again quote the judgment,

“The seriousness of the offence lies not in the extent of punishment it carries but on the impact, it has on the social psyche and public order. A societal change is required via education and awareness to curb such kind of crimes. Also, there is a need for formulation and implementation of policies by the government to create sensitization of the masses, more so, the youth in schools and colleges towards the need for gender equality…”

It is extremely heartening to see an example of judicial decision-making at the trial court level displaying the breadth of vision not to fall prey to the practice of inflicting retributive justice, and, more importantly, lending a voice to the articulation of the right to privacy in India. The right to bodily privacy in an enclosed space is one of the most basic forms of privacy an individual can assert, prior even to norms of data security in technology, and it is imperative that such a right is clarified in Indian jurisprudence so as to enable engagement with the larger questions in the field of privacy.

Photo by kellinahandbasket, Flickr. CC License 2.0 Generic.

Anupam Kher’s Cockroach Tweet: Cultural Reference or Hate Speech?

Written by Siddharth Manohar

The noise surrounding the recent controversy over a tweet by Indian actor (and UN Ambassador for Gender Equality) Anupam Kher made it difficult to look into why it caught so much attention. That it did is beyond doubt: it garnered over six thousand hits, significantly more than almost all of his other tweets, and was followed by plenty of coverage and responses from its audience, who shared their own views as well. Here I try to look at whether there was any basis for the criticism that the tweet received, and the degree to which it was justified.

To start off, it would be useful to reproduce the lines in their original form:

घरों में पेस्ट कंट्रोल होता है तो कॉक्रोच, कीड़े मकोड़े इत्यादि बाहर निकलते है घर साफ़ होता है। वैसे ही आजकल देश का पेस्ट कंट्रोल चल रहा है

Which translates into: “During pest control in houses, the cockroaches and other insects etc. are removed. The house gets cleaned. Similarly, pest control of the country is going on these days.”

On an initial reading, it seems a harmless and vague insult. The use of the term 'cockroach', which has attracted the most attention, seems to be employed as a characterisation of anything undesirable, be it problems, politics, or people. As a standalone insult, it is far less venomous than some of the other material one may find on the website. Apart from containing a reference to one of the actor's films, it is also vague and targets no group explicitly. It is therefore understandable that the issue has its share of people who may be bewildered by what could possibly be so harmful in this particular tweet, and who are likely to pass off the criticism as the kind of overreaction that seems increasingly common.

To understand whether there is a valid criticism of the tweet, we must look at the larger context in which such a term is understood. Comparing groups of people to animals and pests has a long, concrete, and troubling history. The practice has, over time and study, acquired the name 'dehumanisation': the process by which language and discourse are used to make a group of people seem less than human. It is a widely documented and extremely effective method of incitement to violence.

The reasoning behind its usage is also interesting and relevant. According to Helen Fein (Benesch, 2008), the purpose of this kind of discourse is to place a certain group of people outside the limits of moral consideration and obligation. This is because the default moral understanding of a majority of people is underpinned by the principle that it is unacceptable to carry out violent acts of hate, or to kill any person. The repeated categorisation of a group of people as the 'other', and the polarisation of their identity as a group not worthy of human respect or equal rights, works on the mind of the larger public: acts of violence and crimes start to seem more acceptable and less outrageous when committed against this group, and the process of dehumanisation escalates over time.

These narratives most often target a specific identity, most famously ethnicity and religious identity. One of the most prominent examples occurred in inter-war Germany, where a large amount of material alienating and dehumanising Jewish people was systematically churned out by state agencies instructed with an agenda. Similarly, the build-up to the Rwandan genocide in 1994 saw a very strong narrative demonising the Tutsi ethnic group, labelling them Inyenzi (cockroaches) who could not contribute to society because of who they were, their basic identity. Such a narrative creates a larger feeling of resentment amongst the public against the target group, making it easier to commit acts of violence against them. Susan Benesch has argued that there cannot in fact be a large-scale violent attack against a group of people living amongst a majority without the cooperation, or at least the tacit acceptance, of that larger group.

The comparison of people to pests and animals has repeatedly been used as a tool in this process of moulding public sentiment against certain groups. In the cases above, the narrative it served to create helped in the execution of large-scale genocidal operations that have left millions of people dead over the decades. Dehumanisation has also been included in an academic study devising a ten-step model of genocide. The historical evidence overwhelmingly suggests that the use of such terms to build a narrative is part of a larger build-up towards organised violence along lines of group identity.

To suggest that an Indian actor is sending out a call for violence would be ill-thought-out, and ignorant of the complexity of the issue. What does need to be observed, however, is how easily such discourse is used to create and divide identities, and what values are ascribed to those identities. While healthy and vociferous debate forms an important part of a democracy, equally important is the tangible effect that speech can have on its immediate surroundings. It is the effects and consequences (and harms) of speech that give rise to justifications for its regulation, and it is therefore always useful to keep a watchful eye on where public discourse takes us.


New EU-US Data Protection Agreement Imminent

Written by Siddharth Manohar

Data flowing from the EU (specifically the European Economic Area) to the US currently has no legal framework regulating it. Does this mean that any data transfer from the EU to the US is illegal? In my previous post on the issue I mentioned that the old agreement regulating such transfers had been struck down by the Court of Justice of the European Union (CJEU). National data protection authorities in the EU have taken the pragmatic step of holding back from acting against all data transfers until a new agreement is reached to replace the old Safe Harbour Agreement.

A breakthrough in this respect came a couple of weeks ago, with the European Commission announcing that the two sides have agreed on a new framework to protect the rights of individuals whose data is processed by US companies on their local servers. Once finalised, the agreement will replace the Safe Harbour principles in order to legalise the data transfer. This new framework, called the EU-US Privacy Shield, imposes three sets of strong obligations: on data handling, on transparency, and on redress mechanisms.

The first major obligation is on US companies to make and publish commitments on data protection and individual rights. These commitments make them accountable to the US Federal Trade Commission (FTC), as well as to the European Data Protection Authorities (DPAs). The second consists of restrictions on surveillance practices by US state authorities. Any surveillance will now be subject to clear limitations, safeguards, and oversight mechanisms, and its methods must be only those that are necessary and proportionate. Mass surveillance has been ruled out entirely, and meetings to review these practices have also been planned for future follow-up. The third part of the arrangement consists of a redress mechanism: European DPAs can refer cases to the US Department of Commerce and the FTC, and the option of alternative dispute resolution is also provided.

The parties are now working towards the measures required to put the new agreement in place, specifically the US, which will seek to formalise the commitments made in the agreement. The European Commission, on the other hand, is preparing a draft 'adequacy decision' that member states can adopt to formalise the process on the EU side. The full text of the agreement is expected to be made available in the coming weeks.

The agreement has also come under criticism from privacy experts, who claim that it suffers from the same weaknesses as the Safe Harbour Agreement. They argue that it is a mere political compromise that does not help protect the rights and data of users; that would require amendments to national laws on both sides. Controversial provisions of US law that authorise infringements of users' rights remain in effect, such as Section 702, which allows surveillance of data relating to non-US persons to be carried out in the US, and Executive Order 12333, which deals with surveillance outside the US and has no legal oversight mechanism whatsoever. It is these laws that would need amendment in order to make surveillance subject to conditions of necessity and proportionality.

Other persistent problems remain, including the provision for self-certification, which gives inadequate assurance that privacy standards will actually be enforced. A recent amendment to a Bill that would provide redress mechanisms for EU users to enforce rights over their personal data also adds to the problems plaguing the possible effectiveness of the new agreement. The long-term solution to this situation does not look like it will arise from a single event or set of negotiations, and we now await the release of the full text of the agreement to see where we can go from here.


TRAI releases Regulations enforcing Net Neutrality, prohibits Differential Pricing

Written by Siddharth Manohar

The Telecom Regulatory Authority of India (TRAI) has come out with a set of regulations explicitly prohibiting differential pricing for data services in India.

3. Prohibition of discriminatory tariffs.— (1) No service provider shall offer or charge discriminatory tariffs for data services on the basis of content.

(2) No service provider shall enter into any arrangement, agreement or contract, by whatever name called, with any person, natural or legal, that has the effect of discriminatory tariffs for data services being offered or charged to the consumer on the basis of content.

TRAI recently concluded a public consultation process regarding differential pricing in data services. The consultation paper covered all differently-priced or zero-rated services offered through data. The process witnessed tremendous public participation, with a spirited campaign by Internet activists and a counter-campaign by Facebook, which garnered support from users through the narrative of connecting those who have no access.

CCG submitted a formal response as part of this process, which you can read here, and filed an additional counter-comment signed by ten different civil society and research organizations.

The consultation process also involved a public discussion on the questions raised, where the usual suspects were all present: telecom companies arguing for differential pricing, and internet activists against it. Also present were startup and user representatives.

Facebook’s telecom partner for carrying the Free Basics platform in India —Reliance Communications — was then instructed by TRAI to put a hold on rolling out Free Basics until they came up with a clear position on differential pricing and net neutrality. The regulator later confirmed that they received a compliance report to this effect as well. Facebook had been aggressively pursuing its campaign to collect support in favour of its platform for the entire duration of the public consultation.

TRAI has clarified that these regulations 'may' be reviewed after a two-year period, or earlier if the Authority so decides. An exception to the prohibition has been included to account for emergency services and services offered during 'times of grave public emergency'. A further exception covers closed networks that charge a special tariff for their usage.

[We will shortly update the piece with more analysis of the regulations] 

A Constitutional Right against Free Basics? The Link between Article 19 and Zero Rating

Written by Siddharth Manohar

The past month has witnessed a rising tide of public debate surrounding net neutrality once more, accompanying the release of another Consultation Paper by TRAI and another AIB video urging public participation in the ongoing consultation process. Added to this mix is an effort by Facebook to build consensus amongst its userbase regarding the effect of 'Free Basics' on net neutrality. The crux of one set of arguments in these debates is the harm that a differentially priced platform can cause to competition in the market for Internet applications, along with the related concern of monopolisation of a section of the country's userbase. The other side places emphasis on the need to increase the accessibility of the Internet, and the two sides also disagree on the interpretation of the term 'net neutrality'.

An important issue that gets missed in the rhetoric is the fundamental right of Internet users to access a diverse set of media sources on any platform whose nature is that of a public utility. Media diversity implies that the information stream reaching the public through any public medium must be protected from undue influence by one or a few entities with a controlling effect on the market for media content. It also rules out any role for the carriers of content (usually known as intermediaries or service providers) in choosing whose content, or what kind of content, is allowed on the medium. The usage and allocation of the medium as a public resource is subject to certain constitutional principles as well, and these too are ignored when discussing how to regulate (or not regulate) Internet-related services in India.

The Right to be Informed

Article 19 of the Constitution guarantees the right to freedom of expression, and this right includes the right of citizens to a plural media. As discussed by the Supreme Court in Secretary, Ministry of Information & Broadcasting, Govt. of India v. Cricket Association of Bengal, the debate and opinions sought to be protected by Article 19 need to be informed by a plurality of views and an 'aware citizenry'. What does this mean for regulation of access to the Internet? It translates into ensuring that a wide array of media consumption choices is available to the public. No communication platform can remain under the control of one or a few parties, for this restricts the nature of the content available through that medium, narrowing the ideas and views available to citizens on any public platform.

It is far from difficult to balance this concern with the free market. The principle encourages a competitive atmosphere between content providers, and seeks to avoid a situation where a disproportionately dominant player exerts undue influence over the functioning of the market. The presence of one or a few dominant entities enjoying a magnified impact on the market makes it difficult for new entrants to make a dent in the dominant player's market share, reducing the possibility of any competition from these smaller players.

This constitutional requirement conflicts with the concept of zero-rated plans at its core: can we really have a telecom company deciding the exact pieces of content that we receive in preference to all other content? Are we willing to hand it the power of shaping consumer choice, public access, and opinion simply by choosing the right business partners? If we can conclusively answer these questions in the affirmative, zero-rating plans would have no quarrel with Article 19. Indeed, such an affirmation would even successfully dispense with one of the core tenets of the idea of net neutrality: that all data be treated in the same manner irrespective of its content.

Spectrum as a Public Resource

The Cricket Association of Bengal judgment also discusses the regulation of spectrum as a public resource. This is arguably even more fundamental, addressing the question of what qualifies as legitimate usage and allocation of spectrum. The Court characterised airwaves as a scarce public resource, which ought to be used in the best interests of the public, and in a manner that prevents any infraction of their rights. Justice Reddy's opinion in the judgment even acknowledges the requirement of media plurality as part of the required policy approach for regulating spectrum.

Another SC judgment arguing in a similar vein, Association of Unified Tele Services Providers & Ors. v. Union of India & Ors., ruled that the State is bound to use spectrum resources solely for the enjoyment of the general public. Applying the public trust doctrine, it explained that the resources are prohibited from being used or transferred for any kind of private or commercial interest.

What the available jurisprudence effectively lays down can be encapsulated as follows: spectrum is a public resource that can only be used and allocated by the State for the benefit of the general public, and cannot be used in any manner for private or commercial interests. This public interest comprises various concerns, one of them being the right to a diverse set of media content sources, so that interested parties do not acquire power or control over the content available to consumers. What this means for the State is that spectrum must be used to maximise the variety of media available to end-users, and to prevent the medium of transmission from being controlled by one or a few players.

This creates a tricky situation for TRAI, which has asked for public comments on the desirability of differential pricing in data services. There is a glaring lack of clarity on the exact mandate given to the State regarding how to use spectrum resources to achieve TRAI's officially cited objective of providing 'free' Internet access to consumers. Without discussion focused on the exact nature of what we want to achieve, we will continue to be forced to take reactionary positions on most issues and developments. Forming a concrete policy to connect India's billion can only get easier once we are able to agree upon a common goal and a set of principles for how to get there.


Image Credit: Everybody Loves Eric Raymond.

SC asks Centre how to regulate Sexually Exploitative Content on Social Media

Written by Siddharth Manohar

The Supreme Court on Friday rejected a petition to block the websites of dominant social media platforms on the ground that they were used to spread videos of gang rapes and to facilitate a market for child prostitution. The two-judge bench of Justices UU Lalit and Madan B Lokur reasoned that blocking these sites was not a feasible solution, as it would set a trend of blocking wide swathes of internet access to solve specific problems with how it is used.

The decision is in light of a petition filed by Hyderabad-based NGO Prajwala, asking the Court to ban social media websites used to traffic children and to put in place a mechanism to monitor the content circulated through mobile applications such as Whatsapp. The same bench had in April recognized the importance of regulating objectionable sexual material being circulated through social media applications. This was based on suo motu cognizance of a letter addressed to the then Chief Justice of India HL Dattu, asking the Court to take action against those responsible for posting a video of an incident of gang rape on social media.

The Court has asked the Additional Solicitor General to look into why no action was taken against the social media platforms by the police dealing with the cases. The Centre had earlier communicated that it is difficult to monitor content circulated through mobile phones, and even more so to find the culprit who started the process. Tracking the user becomes much easier, it said, when a computer is used to spread the objectionable content.

The Court did, however, refer to the Central Government the important question of whether these social media platforms can be prosecuted for their role in spreading offensive material such as video recordings of rape and child pornography. The Court added that it would wait for a response from the Central Government before deciding what action ought to be taken in the matter.

Earlier orders in the matters can be accessed here, and here.

Can the EU beat Big Data and the NSA? An Overview of the Max Schrems saga

Written by Siddharth Manohar


The decision in the famous and controversial Schrems case (press release) delivered last month has created confusion with respect to the rules applicable to companies transferring data out of the EU and into the USA. The case arose in light of Edward Snowden's revelations regarding data handling by companies like Google and Facebook in the face of extensive acquisition of user information by US security agencies.

The matter came up before the Court of Justice of the European Union (CJEU) on referral from the High Court of Ireland. The case dealt with the permissibility and legality of a legal instrument known as the Safe Harbour Agreement, which regulates the transfer of data from the EU to the US by internet companies. The effectiveness of this regulation was thrown into serious doubt following the Snowden revelations regarding large-scale surveillance carried out by US state agencies, such as the NSA, through access to users' private data.

The agreement was negotiated between the US and the EU in 2000, and allowed American internet companies to transfer data from the European Economic Area to the US without having to undertake the cumbersome task of complying with each individual EU country's privacy laws. It contained a set of principles that legalised data transfer out of the EU by US companies that demonstrated adherence to a certain set of data-handling policies. More than an enforceable standard protecting users' data, it was a legal framework that gave the European Commission a basis to claim that data transfer to the USA was legal under European law.

The Safe Harbour Agreement was meant to simplify compliance with the 1995 Data Protection Directive of the European Union, which laid down fundamental principles to be upheld in the processing and handling of personal data. A 2000 decision of the European Commission held that the Safe Harbour Agreement ensured the adequacy of data protection and privacy required by this Directive, and came to be popularly known as the "Safe Harbour decision". Since then, over 4,000 companies signed on to the Agreement, registering themselves to legally export data out of the EU to the USA.

After the Snowden leak however, it became clear that these principles were blatantly violated on a large scale. It was in this context that Maximilian Schrems, an Austrian law student, approached the Irish Data Protection authority complaining that US laws did not provide adequate protection to users’ private data against surveillance, as required by the Data Protection Directive. The Data Protection Authority dismissed the complaint, and Schrems then chose to appeal to the Irish High Court. The High Court, having heard the petition, chose to refer an important question to the CJEU: whether the 2000 EC decision, which upheld the Safe Harbour Agreement as satisfying the requirements of the EU Data Protection Directive, meant that national data protection authorities were prevented from taking up complaints against transfer of a person’s data as violating the Directive.

The CJEU answered emphatically in the negative, emphasising that a mere finding by the Commission of adequate data protection policy by an external country could not take away the powers of national data protection authorities. The national authority could therefore independently investigate privacy claims against a private US company handling an EU citizen’s data.

The CJEU also found that legislation authorising state authorities to interfere with the data handling of private companies completely overrode the provisions of the Safe Harbour Agreement. This was based on two-pronged reasoning: firstly, the data acquired by state agencies was processed in ways above and beyond what was necessary for protecting national security. Secondly, users whose data had been acquired by the authorities had no legal recourse to challenge such action or to have that data erased. For these reasons, it held that the Safe Harbour Agreement failed the requirements of the EU Data Protection Directive.

This decision prompted considerable deliberation over what now made data transfer from the EU to the US legally valid, since its main legal basis had just been struck down. The Agreement, however, was not the only legal basis for such transfers. Further, for any given transfer to be held illegal, individual handlers of data would now have to be challenged before national data protection authorities. The decision therefore does not, importantly, bring the curtain down on all data transfer from the EU to the US; rather, the legal machinery of the Safe Harbour Agreement has rightly been found ineffective.

Therefore, while internet companies do not need to shut down operations in the EU, they do need to review their data handling practices and the adherence of those practices to other available norms, such as the EU’s model clauses for data transfer to external countries. Some companies, such as Microsoft, have gone a step further and proposed solutions to fill the vacuum left by the Safe Harbour Agreement, as set out in this blog post by the head of its legal department.

That said, the EU has issued a statement that an agreement needs to be reached with US companies by January 2016, failing which it will consider stronger enforcement measures, such as coordinated action by the data protection authorities of each EU country. The situation is still evolving, and this shake-up may well lead to better-enforced privacy and data protection principles.

Ministry of Road Transport Issues Advisory on Taxi Aggregator Apps

Written by Siddharth Manohar

The Ministry of Road Transport and Highways’ recent advisory contains detailed guidelines for internet-based taxi aggregators (also known as ride-hailing apps) and their continued operation in India. An interim order by the Delhi High Court in July had banned the operation of such services in Delhi, and this advisory has been welcomed by the companies operating in the market.

Road transport is a subject under List II of the Constitution, which means that only State governments (as opposed to the Union Government) may frame and enforce policy on matters falling under it. As highlighted in the statement of the Secretary of the Ministry, Vijay Chhibber, the object of the advisory is to regulate taxi services offered by app-based providers such as Uber and Ola as effectively as regular taxi companies. The advisory, he said, would “clear the air for states to form their own rules, treating them at par with other cab fleet owners”. Being a mere advisory, it carries no direct penal consequences even if service providers do not comply with the guidelines. It does, however, act as a model for states to develop their own regulations, backed by the power to punish non-compliance.

The guidelines go some way towards solving certain headaches caused by applying the Radio Taxi Scheme, 2006 to ride-hailing internet applications. The plea of the taxi aggregator companies at the time of the ban had been that their service was essentially different from that of a regular taxi company, and that they could not be shoehorned into an older regulatory scheme designed for conventional taxi operators. The drivers, for instance, are not direct employees of the taxi aggregator company. The rule in the Radio Taxi Scheme holding the employer responsible for the actions and safety of the driver would therefore not apply to taxi aggregators.

To deal with this, the new guidelines specify the background checks and registration documents the company must acquire when registering a driver with the service. They also require operators to carry out a training programme for their registered drivers. This takes into account the different mode of operation of these companies while nonetheless setting down the same regulatory standards for the operation of the taxi service.

The guidelines take the useful step of disallowing drivers registered with the service, or the vehicles they use, from advertising themselves as regular taxis. The document also bars companies from mandating any minimum number of driving hours as a condition of registration with the service, and requires compliance with rules on the maximum number of driving hours, to ensure driving safety. It further prevents companies from restricting drivers registered with them from registering with other taxi aggregator services. These measures put greater bargaining power in the hands of registered drivers and avoid anti-competitive practices, such as preventing a driver from opting for a different service that offers better incentives for completing a certain number of rides in a day.

The advisory also contains a host of requirements geared towards passenger safety. Firstly, the comprehensive rules on registration and permit documents for drivers and vehicles boost accountability. Registration includes police verification of drivers who are to be involved with the service. The company is disallowed from registering persons convicted in the past seven years of driving under the influence of alcohol or drugs, or of any cognizable offence, including sexual offences, terror-related offences, and property offences. The guidelines require detailed information about the driver and vehicle to be provided on the platform to users of the service, and for this data, along with location data, to be transferred to two trusted contacts, apart from being transferred to the authorities when required. For this purpose, all vehicles are to be fitted with location tracking technology, apart from the standard first aid and safety equipment prescribed in relevant laws. Further, where a complaint of discriminatory practices is filed against a driver, it suggests that the driver be suspended from accessing the service while internal investigations into the matter are ongoing.

The advisory also takes up some useful administrative requirements. It demands that the service operator incorporate a legal entity within India. It mandates an office address in each state where the company operates, along with an assigned in-charge, for easier service of notice for purposes such as court proceedings. It also requires a comprehensive list of all drivers, with their associated vehicles and details, to be submitted to the Licensing Authority on a monthly basis. Finally, it mandates that the company provide a 24x7 helpline with a call centre, as well as a web-based portal, through which customer grievances can be communicated.

In dealing with these different aspects of the regulation of taxi aggregators, the advisory strikes a middle ground between those who complain that these services bypass accountability measures and ignore customer safety, and the companies, who argue that while customer safety and quality of service remain important, the existing rules for commercial road transport services cannot be applied to them as is. One hopes that the State governments now take up the clear regulatory call sent out by the Ministry and set about framing legislation that enforces the provisions laid down in the advisory. The Karnataka State transport department is reported to be coming out with the relevant policy soon; if so, it will be the first instance of authoritative regulation on the matter framed as a considered response.