On 6 October 2020, the European Court of Justice (ECJ or "the Court") delivered its much-anticipated judgments in C-623/17, Privacy International, from the UK, and in the joined cases C-511/18, La Quadrature du Net and Others, and C-512/18, French Data Network and Others, from France, and C-520/18, Ordre des barreaux francophones et germanophone and Others, from Belgium (collectively, the "Bulk Communications Surveillance Judgments").
In this post, I briefly discuss the Bulk Communications Surveillance Judgments and their significance for other countries, particularly India.
Through these cases, the Court invalidated the disproportionate interference by Member States with the rights of their citizens as provided by EU law, in particular the Directive on privacy and electronic communications (e-Privacy Directive) and the European Union's Charter of Fundamental Rights (EU Charter). The Court assessed the Member States' bulk communications surveillance laws and practices relating to their access to and use of telecommunications data.
The Court recognised the importance of the State's positive obligations that underpin surveillance, although it noted that it was essential for surveillance systems to conform to the general principles of EU law and the rights guaranteed under the EU Charter. It laid down clear principles and measures as to when and how national authorities could access and use telecommunications data (discussed further in the sections 'The UK Judgment' and 'The French and Belgian Judgment'). It also carved out a few exceptions (in the joined cases of France and Belgium) for emergency situations, but held that such measures would have to respond to threats that are serious and genuine (discussed further in the section 'The French and Belgian Judgment').
The Cases in Brief
The Court delivered two separate judgments, one in the UK case and one in the joined cases of France and Belgium. Since these cases raised similar sets of issues, the proceedings were joined. The UK application challenged the bulk acquisition and use of telecommunications data by its Security and Intelligence Agencies (SIAs) in the interest of national security (under the UK's Telecommunications Act 1984). The French and Belgian applications challenged indiscriminate data retention and access by SIAs for combating crime.
The French and Belgian applications questioned the legality of their respective data retention laws (numerous domestic surveillance laws which permitted bulk collection of telecommunications data) that imposed blanket obligations on Electronic Communications Service Providers (ECSPs) to provide relevant data. The Belgian law required ECSPs to retain various kinds of traffic and location data for a period of 12 months, while the French law provided for automated analysis and real-time data collection measures for preventing terrorism. The French application also raised the issue of notifying persons placed under surveillance.
The Member States contended that such surveillance measures enabled them, inter alia, to safeguard national security, prevent terrorism, and combat serious crime. Hence, they claimed that the e-Privacy Directive did not apply to their surveillance laws and activities.
The UK Judgment
The ECJ found the UK surveillance regime unlawful and inconsistent with EU law, specifically the e-Privacy Directive. The Court analysed the scope and scheme of the e-Privacy Directive with regard to the exclusion of certain State purposes such as national and public security, defence, and criminal investigation. Noting the importance of such State purposes, it held that EU Member States could adopt legislative measures that restricted the scope of the rights and obligations (Articles 5, 6 and 9) provided in the e-Privacy Directive. However, this was allowed only if the Member States complied with the requirements laid down by the Court in Tele2 Sverige and Watson and Others (C-203/15 and C-698/15) (Tele2) and with the e-Privacy Directive. In addition, the Court held that the EU Charter must be respected too. In Tele2, the ECJ held that legislative measures obligating ECSPs to retain data must be targeted and limited to what was strictly necessary. Such targeted retention had to relate to specific categories of persons and data, for a limited time period. Access to the data must also be subject to prior review by an independent body.
The e-Privacy Directive ensures the confidentiality of electronic communications and the data relating to them (Article 5(1)). It allows ECSPs to retain metadata (context-specific data relating to users and subscribers, location and traffic) for various purposes such as billing, value-added services and security. However, this data must be deleted or made anonymous once the purpose is fulfilled, unless a law allows for a derogation for State purposes. The e-Privacy Directive allows the Member States to derogate (Article 15(1)) from the principle of confidentiality and the corresponding obligations (contained in Articles 6 (traffic data) and 9 (location data other than traffic data)) for certain State purposes, where doing so is appropriate, necessary and proportionate.
The Court clarified that measures undertaken for the purpose of national security did not make EU law inapplicable or exempt the Member States from their obligation to ensure the confidentiality of communications under the e-Privacy Directive. Hence, an independent review of surveillance activities, such as data retention for indefinite time periods, or further processing or sharing, must be conducted to authorise such activities. The Court noted that the domestic law at present did not provide for prior review as a limit on the above-mentioned surveillance activities.
The French and Belgian Judgment
While assessing the joined cases, the Court arrived at a determination in terms similar to the UK case. It reiterated that the exception (Article 15(1) of the e-Privacy Directive) to the principle of confidentiality of communications (Article 5(1) of the e-Privacy Directive) should not become the norm. Hence, national measures that provided for general and indiscriminate data retention and access for State purposes were held to be incompatible with EU law, specifically the e-Privacy Directive.
The Court in the joined cases, unlike the UK case, allowed for specific derogations for State purposes such as safeguarding national security, combating serious crime and preventing serious threats. It laid down certain requirements that the Member States had to comply with in case of derogations. The derogations should (1) be clear and precise as to their stated objective; (2) be limited to what is strictly necessary, and to a limited time period; (3) be accompanied by a safeguards framework, including substantive and procedural conditions to regulate such instances; and (4) include guarantees protecting the individuals concerned against abuse. They should also be subject to 'effective review' by a court or an independent body, and must comply with the general rules and proportionality principles of EU law and the rights provided in the EU Charter.
The Court held that in establishing a minimum threshold for a safeguards framework, the EU Charter must be interpreted along with the European Convention on Human Rights (ECHR). This would ensure consistency between the rights guaranteed under the EU Charter and the corresponding rights guaranteed in the ECHR (as per Article 52(3) of the EU Charter).
The Court, in particular, allowed general and indiscriminate data retention in cases of a serious threat to national security. Such a threat must be genuine, and present or foreseeable. Real-time data collection and automated analysis were allowed in such circumstances, but real-time collection should be limited to persons suspected of involvement in terrorist activities, restricted to what is strictly necessary, and subject to prior review. The Court even allowed general and indiscriminate retention of IP addresses for the purposes of safeguarding national security, combating serious crime and preventing serious threats to public security, provided such retention is limited in time to what is strictly necessary. For such purposes, the Court also permitted ECSPs to retain data relating to the identity of their customers (such as name, postal and email/account addresses and payment details) in a general and indiscriminate manner, without specifying any time limitation.
The Court allowed targeted data retention for the purpose of safeguarding national security and preventing crime, provided that it was for a limited time period and strictly necessary and was done on the basis of objective and non-discriminatory factors. It was held that such retention should be specific to certain categories of persons or geographical areas. The Court also allowed, subject to effective judicial review, expedited data retention after the initial retention period ended, to shed light on serious criminal offences or acts affecting national security. Lastly, in the context of criminal proceedings, the Court held that it was for the Member States to assess the admissibility of evidence resulting from general and indiscriminate data retention. However, the information and evidence must be excluded where it infringes on the right to a fair trial.
Significance of the Bulk Communication Surveillance Judgments
With these cases, the ECJ decisively resolved a long-standing discord between the Member States and privacy activists in the EU. For a while now, the Court has been dealing with questions relating to surveillance programs for national security and law enforcement purposes. Though the Member States have largely considered these programs outside the ambit of EU privacy law, the Court has been expanding the scope of privacy rights.
In Privacy International, the Court considered it necessary to place limitations and controls on State powers in democratic societies. This decision may act as a trigger for surveillance reforms in many parts of the world, particularly for countries aspiring to attain EU adequacy status. India could benefit immensely should it choose to pay heed.
To date, India does not have a comprehensive surveillance framework. Various provisions of the Personal Data Protection Bill, 2019 (Bill), the Information Technology Act, 2000, the Telegraph Act, 1885, and the Code of Criminal Procedure, 1973 provide for targeted surveillance measures. The Bill grants wide powers to the executive (under Clauses 35, 36 and 91) to access personal and non-personal data, in the absence of proper and necessary safeguards. This may cause problems for achieving EU adequacy status under Article 45 of the EU General Data Protection Regulation (GDPR), which assesses the personal data management rules of third countries.
Recent news reports suggest that the Bill, which is under legislative consideration, is likely to undergo a significant overhaul. India could use this as an opportunity to introduce meaningful changes to the Bill as well as its surveillance regime. India's privacy framework could be strengthened by adhering to the principles outlined in the Justice K.S. Puttaswamy v. Union of India judgment and the Bulk Communications Surveillance Judgments.
Embedding Principles of Privacy, Transparency and Accountability
This post has been authored by Jhalak M. Kakkar and Nidhi Singh
In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Aayog Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation that was held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG’s comments and analysis on the Working Document can be accessed here.
In our first post in the series, ‘Building an AI governance framework for India’, we discussed the legal and regulatory implications of the Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its constitutional framework, and (2) based on clearly articulated overarching ‘Principles for Responsible AI’. Part II of the series discussed specific Principles for Responsible AI – Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. We explored the constituent elements of these principles and the avenues for incorporating them into the Indian regulatory framework.
In this final post of the series, we will discuss the remaining principles of Privacy, Transparency and Accountability.
Principle of Privacy
Given the diversity of AI systems, the privacy risks they pose to individuals and to society as a whole are also varied. These may be broadly related to:
(i) Data protection and privacy: This relates to the privacy implications of the use of data by AI systems and the data protection considerations which arise from this use. There are two broad aspects to consider. Firstly, the use of data by AI systems must conform to legal frameworks for data protection. Secondly, given that AI systems can be used to re-identify anonymised data, the mere anonymisation of data for the training of AI systems may not adequately protect an individual's privacy.
a) Data protection legal frameworks: Machine learning and AI technologies have existed for decades; however, it is the explosion in the availability of data that accounts for the advancement of AI technologies in recent years. Machine learning and AI systems depend upon data for their training. Generally, the more data a system is given, the more it learns and, ultimately, the more accurate it becomes. The application of existing data protection frameworks to the use of data by AI systems may raise challenges.
In the Indian context, the Personal Data Protection Bill, 2019 (PDP Bill), currently being considered by Parliament, contains provisions that may apply to certain aspects of the use of data by AI systems. One such provision is Clause 22 of the PDP Bill, which requires data fiduciaries to incorporate the seven 'privacy by design' principles and embed privacy and security into the design and operation of their product and/or network. However, given that AI systems rely significantly on anonymised personal data, their use of data may not fall squarely within the regulatory domain of the PDP Bill. The PDP Bill does not apply to the regulation of anonymised data at large, but the Data Protection Authority has the power to specify a code of practice for methods of de-identification and anonymisation, which will necessarily impact AI technologies' use of data.
b) Use of AI to re-identify anonymised data: AI applications can be used to re-identify anonymised personal data. To safeguard the privacy of individuals, datasets composed of personal data are often anonymised through a de-identification and sampling process before they are shared for the purposes of training AI systems. However, current technology makes it possible for AI systems to reverse this process of anonymisation and re-identify people, with significant privacy implications for an individual's personal data (a minimal sketch of such a linkage attack follows after this list).
(ii) Impact on society: The impact of the use of AI systems on society relates to broader privacy considerations that arise at a societal level due to the deployment and use of AI, including mass surveillance, psychological profiling, and the use of data to manipulate public opinion. Facial recognition surveillance is one such AI application with significant privacy implications for society as a whole: it enables individuals to be easily tracked and identified, and has the potential to significantly transform expectations of privacy and anonymity in public spaces.
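To make the re-identification risk in point (i)(b) above concrete, here is a minimal sketch of a linkage attack. All datasets and field names are invented for illustration: an 'anonymised' dataset that retains quasi-identifiers (postal code, birth year, sex) is joined against a public dataset that carries names alongside the same attributes, re-attaching identities to sensitive records.

```python
import pandas as pd

# "Anonymised" training data: direct identifiers removed, but
# quasi-identifiers (postal code, birth year, sex) retained.
anonymised = pd.DataFrame({
    "postal_code": ["110001", "110002"],
    "birth_year": [1985, 1990],
    "sex": ["F", "M"],
    "diagnosis": ["diabetes", "hypertension"],  # sensitive attribute
})

# Public auxiliary data (e.g. an electoral roll) carrying names
# alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["A. Sharma", "R. Verma"],
    "postal_code": ["110001", "110002"],
    "birth_year": [1985, 1990],
    "sex": ["F", "M"],
})

# A simple join on the quasi-identifiers re-attaches identities
# to the supposedly anonymised records.
reidentified = anonymised.merge(public, on=["postal_code", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

In practice, AI systems automate and scale exactly this kind of linkage across far larger and noisier datasets, which is why anonymisation alone may not protect privacy.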
Due to the varying nature of privacy risks and implications caused by AI systems, various regulatory mechanisms will have to be designed to address these concerns. It is important to put in place a reporting and investigation mechanism that collects and analyses information on privacy impacts caused by the deployment of AI systems, and on privacy incidents that occur in different contexts. The collection of this data would allow actors across the globe to identify common threads of failure and mitigate potential privacy failures arising from the deployment of AI systems.
To this end, we can draw on a mechanism currently in place for reporting and investigating aircraft incidents, detailed under Annex 13 of the Convention on International Civil Aviation (Chicago Convention). It lays down the procedure for investigating aviation incidents and a reporting mechanism for sharing information between countries. The aim of such accident investigation reports is not to apportion blame or liability, but to study the cause of the accident extensively and prevent future incidents.
A similar incident investigation mechanism may be employed for AI incidents involving privacy breaches. With many countries now widely developing and deploying AI systems, such a model of incident investigation would ensure that countries can learn from each other’s experiences and deploy more privacy-secure AI systems.
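As a sketch of what such an AI incident report might record, the structure below lists fields loosely mirroring the factual, no-blame orientation of aviation investigation reports. The field names and example values are illustrative assumptions, not an existing standard.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AIPrivacyIncident:
    """Illustrative record for an AI privacy incident report."""
    incident_id: str
    report_date: date
    jurisdiction: str
    system_description: str      # what the AI system does, in plain language
    data_categories: List[str]   # kinds of personal data implicated
    harm_description: str        # the privacy impact observed
    root_cause: str              # factual cause, not an attribution of blame
    mitigations: List[str] = field(default_factory=list)

# Example report; all details invented.
incident = AIPrivacyIncident(
    incident_id="IN-2020-0001",
    report_date=date(2020, 11, 1),
    jurisdiction="IN",
    system_description="Facial recognition pilot at a transit hub",
    data_categories=["facial images", "location traces"],
    harm_description="Re-identification of bystanders not enrolled in the pilot",
    root_cause="Probe images retained beyond the stated matching purpose",
    mitigations=["Purge probe images after matching", "Shorten retention period"],
)
```

Standardising such records across jurisdictions is what would allow the cross-country learning the aviation model achieves.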
Principle of Transparency
The concept of transparency is a recognised prerequisite for the realisation of 'trustworthy AI'. The goal of transparency in ethical AI is to ensure that the functioning of the AI system and its resultant outcomes are non-discriminatory, fair, and bias-mitigating, and that the AI system inspires public confidence in the delivery of safe and reliable AI innovation and development. Transparency is also important in ensuring better adoption of AI technology: the more users feel that they understand the overall AI system, the more inclined and better equipped they are to use it.
The level of transparency must be tailored to its intended audience. Information about the working of an AI system should be contextualised to the various stakeholder groups interacting with and using the AI system. The Institute of Electrical and Electronics Engineers, a global professional organisation of electronic and electrical engineers, has suggested that different stakeholder groups may require varying levels of transparency. This means that groups such as users, incident investigators, and the general public would require different standards of transparency, depending upon the nature of the information relevant to their use of the AI system.
Presently, many AI algorithms are black boxes: automated decisions are taken based on machine learning over training datasets, and the decision-making process is not explainable. When such AI systems produce a decision, human end users do not know how the system arrived at its conclusions. This points to two major transparency problems: the public perception and understanding of how AI works, and how much developers themselves actually understand about their own AI system's decision-making process. In many cases, developers may not know, or be able to explain, how an AI system reaches its conclusions or how it has arrived at certain solutions.
This results in a lack of transparency. Some organisations have suggested opening up AI algorithms for scrutiny and ending reliance on opaque algorithms. On the other hand, the NITI Working Document is of the view that disclosing the algorithm is not the solution and instead, the focus should be on explaining how the decisions are taken by AI systems. Given the challenges around explainability discussed above, it will be important for NITI Aayog to discuss how such an approach will be operationalised in practice.
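By way of illustration, one family of techniques for explaining decisions without disclosing the algorithm is post-hoc, model-agnostic explanation. The sketch below, on synthetic data, uses permutation importance: it treats the trained model as a black box and measures how much accuracy drops when each input feature is shuffled, flagging the features the model actually relies on. This is one possible approach among many, not the method the Working Document prescribes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Train an opaque model on synthetic data (a stand-in for a black-box system).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy.
# Large drops indicate features the model depends on, without
# requiring disclosure of the model's internals.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```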
While many countries and organisations are researching techniques for increasing the transparency of AI systems, one common suggestion which has gained traction in the last few years is the introduction of labelling mechanisms for AI systems. An example of this is Google's proposal to use 'Model Cards', which are intended to clarify the scope of an AI system's deployment and minimise its usage in contexts for which it may not be well suited.
Model cards are short documents which accompany a trained machine learning model. They enumerate the benchmarked evaluation of the working of an AI system in a variety of conditions, across the different cultural, demographic, and intersectional groups which may be relevant to the intended application of the AI system. They also contain clear information on an AI system's capabilities, including the intended purpose for which it is being deployed, the conditions under which it has been designed to function, and its expected accuracy and limitations. Adopting model cards and similar labelling requirements in the Indian context may be a useful step towards introducing transparency into AI systems.
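As a rough illustration of the kind of information a model card carries, the structure below sketches one for a hypothetical face-matching model. The fields loosely follow Google's 'Model Cards for Model Reporting' proposal; every value is invented.

```python
# Illustrative model card for a hypothetical face-matching model.
model_card = {
    "model_details": {"name": "face-match-v1", "version": "1.0", "date": "2020-11-01"},
    "intended_use": {
        "primary_use": "1:1 identity verification with user consent",
        "out_of_scope": ["mass surveillance", "emotion inference"],
    },
    "evaluation": {
        # Benchmarked accuracy disaggregated across demographic groups,
        # so deployers can see where performance degrades.
        "overall_accuracy": 0.97,
        "by_group": {"women": 0.95, "men": 0.98, "darker_skin_tones": 0.93},
    },
    "limitations": [
        "Accuracy drops sharply for low-light and off-angle images",
        "Not evaluated on subjects under 18",
    ],
}
```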
Principle of Accountability
The Principle of Accountability aims to recognise the responsibility of the different organisations and individuals that develop, deploy and use AI systems. Accountability is about responsibility, answerability and trust. There is no one standard form of accountability; rather, it is dependent upon the context of the AI system and the circumstances of its deployment.
Holding individuals and entities accountable for harm caused by AI systems poses significant challenges, as AI systems generally involve multiple parties at various stages of the development process. The regulation of the adverse impacts caused by AI systems often goes beyond the existing regimes of tort law, privacy law or consumer protection law. Some degree of accountability can be achieved by enabling greater human oversight. In order to foster trust in AI and appropriately determine the party who is accountable, it is necessary to build a set of shared principles that clarify the responsibilities of each stakeholder involved with the research, development and implementation of an AI system, ranging from developers to service providers and end users.
Accountability has to be ensured at the following stages of an AI system:
(i) Pre-deployment: It would be useful to implement an audit process before the AI system is deployed. A potential mechanism for implementing this could be a multi-stage audit process which is undertaken post design, but before the deployment of the AI system by the developer. This would involve scoping, mapping and testing a potential AI system before it is released to the public. This can include ensuring risk mitigation strategies for changing development environments and ensuring documentation of policies, processes and technologies used in the AI system.
Depending on the nature of the AI system and the potential for risk, regulatory guidelines can be developed prescribing the involvement of various categories of auditors, such as internal auditors, expert third parties, and auditors from the relevant regulatory agency, at various stages of the audit. Such pre-deployment audits are aimed at closing the accountability gap which currently exists.
(ii) During deployment: Once the AI system has been deployed, it is important to keep auditing it to track the changes and evolution of the system over the course of its deployment. AI systems constantly learn from data and evolve to become better and more accurate. It is important that the development team continuously monitors the system to capture any errors that may arise, including inconsistencies arising from input data or design features, and addresses them promptly.
(iii) Post-deployment: Ensuring accountability post-deployment in an AI system can be challenging. The NITI Working Document also recognised that assigning accountability for specific decisions becomes difficult in a scenario with multiple players in the development and deployment of an AI system. In the absence of any consequences for decisions harming others, no one party would feel obligated to take responsibility or take actions to mitigate the effect of the AI systems. Additionally, the lack of accountability also leads to difficulties in grievance redressal mechanisms which can be used to address scenarios where harm has arisen from the use of AI systems.
The Council of Europe, in its guidelines on the human rights impacts of algorithmic systems, highlighted the need for effective remedies to ensure responsibility and accountability for the protection of human rights in the context of the deployment of AI systems. A potential model for grievance redressal is the redressal mechanism suggested in the AI4People’s Ethical Framework for a Good Society report by the Atomium – European Institute for Science, Media and Democracy. The report suggests that any grievance redressal mechanism for AI systems would have to be widely accessible and include redress for harms inflicted, costs incurred, and other grievances caused by the AI system. It must demarcate a clear system of accountability for both organisations and individuals. Of the various redressal mechanisms they have suggested, two significant mechanisms are:
(a) AI ombudsperson: This would ensure the auditing of allegedly unfair or inequitable uses of AI, reported by users or the public at large, through an accessible judicial process.
(b) Guided process for registering a complaint: This envisions laying down a simple process, similar to filing a Right to Information request, which can be used to bring discrepancies, or faults in an AI system to the notice of the authorities.
Such mechanisms can be evolved to address the human rights concerns and harms arising from the use of AI systems in India.
Conclusion
In early October, the Government of India hosted the Responsible AI for Social Empowerment (RAISE) Summit, which involved discussions around India's vision and roadmap for social transformation, inclusion and empowerment through Responsible AI. At the RAISE Summit, speakers underlined the need for adopting AI ethics and a human-centred approach to the deployment of AI systems. However, this conversation is still at a nascent stage, and several rounds of consultations may be required to build these principles into an Indian AI governance and regulatory framework.
As India enters the next stage of developing and deploying AI systems, it is important to have multi-stakeholder consultations to discuss mechanisms for the adoption of principles for Responsible AI. This will enable the framing of an effective governance framework for AI in India that is firmly grounded in India's constitutional framework. While the NITI Aayog Working Document has introduced the concept of 'Responsible AI' and the ethics around which AI systems may be designed, it lacks substantive discussion on these principles. Hence, in our analysis, we have explored global views and practices around these principles and suggested mechanisms appropriate for adoption in India's governance framework for AI. Our detailed analysis of these principles can be accessed in our comments to the NITI Aayog's Working Document 'Towards Responsible AI for All'.
India is in the midst of establishing a robust data governance framework, which will impact the rights and liabilities of all key stakeholders – the government, private entities, and citizens at large. As a parliamentary committee debates its first personal data protection legislation (‘PDPB 2019’), proposals for the regulation of non-personal data and a data empowerment and protection architecture are already underway.
As data processing capabilities continue to evolve at a feverish pace, basic data protection regulations like the PDPB 2019 might not be sufficient to address new challenges. For example, big data analytics renders traditional notions of consent meaningless as users have no knowledge of how such algorithms behave and what determinations are made about them by such technology.
Creative data governance models, aimed at reversing the power dynamics of the larger data economy, are the need of the hour. Recognising these challenges, policymakers are driving the conversation on data governance in the right direction. However, they might be missing out on crucial experiments being run in other parts of the world.
As users of digital products and services increasingly lose control over data flows, various new models of data governance are being recommended, for example data trusts, data cooperatives, and data commons. Of these, one of the most promising is the data trust.
(For the purposes of this blog post, I use the phrase 'data processors' as an umbrella term covering data fiduciaries/controllers and data processors in the legal sense. The word 'users' is meant to include all data principals/subjects.)
What are data trusts?
Though there are various definitions of data trusts, one helpful formulation is that 'data trusts are intermediaries that aggregate user interests and represent them more effectively vis-à-vis data processors.'
To solve the information asymmetries and power imbalances between users and data processors, data trusts will act as facilitators of data flows between the two parties, but on the users' terms. Data trusts will owe a fiduciary duty to, and act in the best interests of, their members. They will have the requisite legal and technical knowledge to act on behalf of users. Instead of users making potentially ill-informed decisions over data processing, data trusts will make such decisions on their behalf, based on pre-decided factors like a bar on third-party sharing, and in their best interests. For example, data trusts can be to users what mutual fund managers are to potential investors in capital markets.
Currently, in a typical transaction in the data economy, users who wish to use a particular digital service have neither the knowledge to understand the possible privacy risks nor the negotiating power to demand change. Data trusts, with a fiduciary responsibility towards users, specialised knowledge, and multiple members, might succeed in tilting the power dynamics back in favour of users. Data trusts might be relevant for both the protection and the controlled sharing of personal as well as non-personal data.
(MeitY's Non-Personal Data Governance Framework introduces the concepts of data trustees and data trusts into India's larger data governance and regulatory framework. But this applies only to the governance of 'non-personal data', and not personal data as recommended here. CCG's comments on MeitY's Non-Personal Data Governance Framework can be accessed here.)
Challenges with data trusts
Though creative solutions like data trusts seem promising in theory, they must be thoroughly tested and experimented with before wide-scale implementation. Firstly, such a new form of trust, where the subject matter of the trust is data, is not envisaged by Indian law (see Section 8 of the Indian Trusts Act, 1882, which provides for only property to be the subject matter of a trust). Current and even proposed regulatory structures do not account for the regulation of institutions like data trusts (the non-personal data governance framework proposes data trusts, but only as data sharing institutions and not as data managers or data stewards, as suggested here). Thus, data trusts will need to be codified into Indian law to be an operative model.
Secondly, data processors might not embrace the notion of data trusts, as it may result in a loss of market power. Larger tech companies, which have existing stores of data on numerous users, may not be sufficiently incentivised to engage with models of data trusts. Structures will need to be built in a way that incentivises data processors to participate in such novel data governance models.
Thirdly, the business or operational models for data trusts will need to be aligned to their members, i.e. users. Data trusts will require money to operate, and for-profit entities may not have the best interests of users in mind. Subscription-based models, whether for profit or not, might fail as users are habituated to free services. Donation-based models might need to be monitored closely for added transparency and accountability.
Lastly, other issues like creation of technical specifications for data sharing and security, contours of consent, and whether data trusts will help in data sharing with the government, will need to be accounted for.
Privacy centric data governance models
At this early stage of developing data governance frameworks suited to Indian needs, policymakers are at a crucial juncture for experimenting with different models. These models must be centred around the protection and preservation of the privacy rights of Indians, against both private and public entities. Privacy must also be read in its expansive definition as provided by the Supreme Court in Justice K.S. Puttaswamy vs. Union of India. Autonomy, choice, and control over informational privacy are crucial to the Supreme Court's interpretation of privacy.
(CCG's privacy law database, which tracks privacy jurisprudence globally and currently contains information from India and Europe, can be accessed here.)
In Greek mythology, Argus Panoptes was a many-eyed, all-seeing and ever-wakeful giant, whose name has come to depict imagery of excessive scrutiny and surveillance. Jeremy Bentham drew on this reference when he designed the panopticon prison, where prisoners would be monitored without their knowledge. Later, Michel Foucault used the panopticon to elaborate the social theory of panopticism, in which the watcher ceases to be external to the watched, resulting in internal surveillance or a 'chilling' effect. This idea of panopticism has gained renewed relevance in the age of digital surveillance.
Amongst the many cutting-edge surveillance technologies being adopted globally, 'Facial Recognition Technology' (FRT) is one of the most rapidly deployed. Its augmentation, 'Live Facial Recognition Technology' (LFRT) or 'Real-time Facial Recognition Technology', has become increasingly effective in the past few years. Improvements in computational power and algorithms have enabled cameras placed at odd angles to detect faces even in motion. This post explores the issues with the increasing State use of FRT around the world and the legal framework surrounding it.
What do FRT and LFRT mean?
FRT refers to the use of algorithms to uniquely detect, recognise, or verify a person using recorded images, sketches, or videos containing their face. The data about a particular face is generally known as a face template. This template is a mathematical representation of a person's face, created by algorithms that mark and map distinct features of the captured image, like the locations of the eyes or the length of the nose. These face templates constitute the biometric database against which new images, sketches, videos, etc. are compared to verify or recognise the identity of a person. As opposed to FRT, which is applied to pre-recorded images and videos, LFRT involves real-time automated facial recognition of all individuals in the camera field's vision. It involves biometric processing of images of all passers-by, using an existing database of images as a reference.
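A minimal sketch of the matching step may help. A face template is treated as a numeric vector (an 'embedding') produced by a trained encoder network, and two faces are declared a match when the similarity between their vectors clears a tuned threshold. The encoder itself is assumed away here; random vectors stand in for real templates.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real system these vectors come from a trained face-encoder network;
# here random vectors stand in for illustration.
enrolled_template = np.random.rand(128)  # stored in the biometric database
probe_template = np.random.rand(128)     # computed from a new image or video frame

# Verification: declare a match when similarity clears a tuned threshold.
# The threshold trades off false matches against false non-matches.
MATCH_THRESHOLD = 0.8
is_match = cosine_similarity(enrolled_template, probe_template) >= MATCH_THRESHOLD
print("match" if is_match else "no match")
```

In LFRT, this comparison runs continuously against every face detected in the video feed, which is why accuracy problems compound at scale.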
The accuracy of FRT algorithms is significantly impacted by factors like distance and angle from which the image was captured or poor lighting conditions. These problems are worsened in LFRT as the images are not captured in a controlled setting, with the subjects in motion, rarely looking at the camera, and often positioned at odd angles from it.
Despite claims of its effectiveness, there has been growing scepticism about the use of FRT. Its use has been linked with the misidentification of people of colour, ethnic minorities, women, and trans people. The prevalent use of FRT may affect not only the privacy rights of such communities, but of all those who are surveilled at large.
The Prevalence of FRT
While FRT has become ubiquitous, LFRT is still in the process of being adopted in countries like the UK, USA, India, and Singapore. The COVID-19 pandemic has further accelerated the adoption of FRT as a way to track the virus’ spread and to build on contactless biometric-based identification systems. For example, in Moscow, city officials were using a system of tens of thousands of cameras equipped with FRT, to check for social distancing measures, usage of face masks, and adherence to quarantine rules to contain the spread of COVID-19.
FRT is also being steadily deployed for mass surveillance, often in violation of universally accepted human rights principles such as necessity and proportionality. These worries have come to the forefront recently with State use of FRT to identify people participating in protests. For example, FRT was used by law enforcement agencies to identify prospective lawbreakers during the protests in Hong Kong, the protests concerning the Citizenship (Amendment) Act, 2019 in New Delhi, and the Black Lives Matter protests across the USA.
Vociferous demands have been made by civil society and digital rights groups for a global moratorium on the pervasive use of FRT that enables mass surveillance, and cities such as Boston and Portland have banned its deployment. However, it remains to be seen how effective these measures are in halting the use of FRT. Even the temporary refusal by Big Tech companies to sell FRT to police forces in the US does not seem to have much instrumental value, as other private companies continue to supply it unhindered.
Regulation of FRT
The approach to the regulation of FRT differs vastly across the globe. The regulatory spectrum ranges from the permissive use of mass surveillance on citizens, in countries like China and Russia, to bans on the use of FRT, for example in Belgium and Boston (in the USA). However, in many countries around the world, including India, the use of FRT continues unabated, worryingly in a regulatory vacuum.
Recently, an appellate court in the UK declared the use of LFRT for law enforcement purposes as unlawful, on grounds of violation of the rights of data privacy and equality. Despite the presence of a legal framework in the UK for data protection and the use of surveillance cameras, the Court of Appeal held that there was no clear guidance on the use of the technology and it gave excessive discretion to the police officers.
The EU has been contemplating a moratorium on the use of FRT in public places. Civil society in the EU is demanding a comprehensive and indefinite ban on the use of FRT and related technology for mass surveillance activities.
In the USA, several orders banning or heavily regulating the use of FRT have been passed. A federal law banning the use of facial recognition and biometric technology by law enforcement has been proposed. The bill seeks to place a moratorium on the use of facial recognition until Congress passes a law to lift the temporary ban. It would apply to federal agencies such as the FBI, as well as local and State police departments.
The Indian Scenario
In July 2019, the Government of India announced its intention to set up a nationwide facial recognition system. The National Crime Records Bureau (NCRB), a government agency operating under the Ministry of Home Affairs, released a request for proposal (RFP) on July 4, 2019 to procure a National Automated Facial Recognition System (AFRS). The deadline for submission of tenders to the RFP has been extended 11 times since July 2019. The stated aim of the AFRS is to help modernise the police force and its processes of information gathering, criminal identification and verification, and the dissemination of such information among various police organisations and units across the country.
Security forces across the states and union territories will have access to the centralised AFRS database, which will assist in the investigation of crimes. However, civil society organisations have raised concerns regarding privacy and increased State surveillance, as the AFRS has no legal basis (statutory or executive) and lacks procedural safeguards and accountability measures, such as an oversight regulatory authority. They have also questioned the accuracy of FRT in identifying darker-skinned women and ethnic minorities and expressed fears of discrimination.
This is in addition to the FRT already in use by law enforcement agencies in Chennai, Hyderabad, Delhi, and Punjab. There are several instances of deployment of FRT in India by the government in the absence of a specific law regulating FRT or a general data protection law.
Even the proposed Personal Data Protection Bill, 2019 is unlikely to assuage privacy challenges arising from the use of FRT by the Indian State. The primary reason for this is the broad exemptions provided to intelligence and law enforcement agencies under Clause 35 of the Bill on grounds of sovereignty and integrity, security of the State, public order, etc.
After the judgment in K.S. Puttaswamy vs. Union of India (Puttaswamy I), which reaffirmed the fundamental right to privacy in India, any act of State surveillance that breaches the right to privacy will need to adhere to the three-part test laid down in Puttaswamy I.
The three prongs of the test are: legality, which postulates the existence of a law along with procedural safeguards; necessity, defined in terms of a legitimate State aim; and proportionality, which ensures a rational nexus between the objects and the means adopted to achieve them. This test was also applied to the use of biometrics technology in the Aadhaar case (Puttaswamy II).
It may be argued that State use of FRT is for the legitimate aim of ensuring national security, but currently its use is neither sanctioned by law, nor does it pass the test of proportionality. For proportionate use of FRT, the State will need to establish that there is a rational nexus between its use and the purpose sought to be achieved and that the use of such technology is the least privacy restrictive measure to achieve the intended goals. As the law stands today in India after Puttaswamy I and II, any use of FRT or LFRT currently is prima facie unconstitutional.
While mass surveillance is legally impermissible in India, targeted surveillance is allowed under Section 5 of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph Rules, 1951, and under Section 69 of the Information Technology Act, 2000 (IT Act). Even the constitutionality of Section 69 of the IT Act has been challenged and is currently pending before the Supreme Court.
Puttaswamy I clarified that the protection of privacy is not completely lost or surrendered in a public place, as it attaches to the person. Hence, the constitutionality of India's surveillance apparatus needs to be assessed against the standards laid down in Puttaswamy I. To check unregulated mass surveillance through the deployment of FRT by the State, there is a need to restructure the overall surveillance regime in the country. Even the Justice Srikrishna Committee report of 2018 highlighted that several executive-sanctioned intelligence-gathering activities of law enforcement agencies would be illegal after Puttaswamy I, as they do not operate under any law.
The need for reform of surveillance laws, in addition to a data protection law in India to safeguard fundamental rights and civil liberties, cannot be stressed enough. Surveillance law reform will have to focus on the use of new technologies like FRT and regulate their deployment with substantive and procedural safeguards to prevent abuse of human rights and civil liberties and provide for relief.
The well-documented limitations of FRT and LFRT in terms of low accuracy rates, along with concerns of profiling and discrimination, make it essential for surveillance law reform to include additional safeguards such as mandatory accuracy and non-discrimination audits. For example, the 2019 Face Recognition Vendor Test (Part 3) by the National Institute of Standards and Technology (NIST), US Department of Commerce, evaluates whether an algorithm performs differently across different demographics in a dataset. The need of the hour is to cease the use of FRT and place a temporary moratorium on any future deployments until surveillance law reforms with adequate proportionality safeguards have been implemented.
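As an illustration of what a mandatory non-discrimination audit might compute, the sketch below calculates an FRT system's false match rate separately for each demographic group, in the spirit of the NIST evaluation. The comparison scores, group labels, and threshold are all invented for illustration.

```python
import numpy as np

# Hypothetical evaluation records: a similarity score for each comparison
# pair, whether the pair is genuinely the same person, and a group label.
scores = np.array([0.91, 0.42, 0.88, 0.79, 0.95, 0.55])
same_person = np.array([True, False, False, True, True, False])
group = np.array(["A", "A", "B", "B", "A", "B"])

THRESHOLD = 0.8  # operating point of the FRT system under audit

for g in np.unique(group):
    impostors = (group == g) & ~same_person
    # False match rate: impostor pairs wrongly accepted at this threshold.
    # A materially higher rate for one group is evidence of disparate impact.
    fmr = (scores[impostors] >= THRESHOLD).mean() if impostors.any() else float("nan")
    print(f"group {g}: false match rate {fmr:.2f}")
```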
The Centre's Non-Personal Data Report proposes a policy framework to regulate the use of anonymised data by Big Tech companies. The question now is: how well do its recommendations measure up to the challenges of regulating non-personal data, amidst a regulatory lacuna? Shashank Mohan of the Centre for Communication Governance explores how the concepts of collective privacy and data trusts lie at the forefront of India's future frameworks for digital governance.
By Shashank Mohan
This post first appeared on The Bastion on September 13, 2020
In the past few years, it has become common knowledge that Big Tech companies like Facebook, Google, and Amazon rely on the exploitation of user data to offer seemingly free services. These companies typically use business models that rely on third party advertising to profit off this data. In exchange for their services, we hand over our data without much control or choice in the transaction.
In response to the privacy threats posed by such business models, countries around the world have been strengthening and enacting data privacy laws. India is currently debating its own personal data protection law, which is loosely based on the benchmark EU data protection law, the General Data Protection Regulation (GDPR). More recently, attention has shifted to the regulation of non-personal data as well. The Indian Government recently released a report on the Non-Personal Data Governance Framework (NPD Report).
But, why do we need to regulate non-personal data?
While progress on the regulation of personal data is necessary and laudable, in the era of Big Data and machine learning, tech companies no longer need to rely solely on processing our personally identifiable data (personal data) to profile or track us. With newer developments in data analytics, they can find patterns and target us using seemingly innocuous data that may be aggregated or anonymised, but need not be identifiable.
For example, they only need to know that I am a brown male in the age range of 25-35, from New Delhi, looking for shoes, and not necessarily my name or my phone number. All of this is “non-personal” data as it’s not linked to my personal identity.
Clearly, tech companies extract value from their service offerings using advanced data analytics and machine learning algorithms which rummage through both personal and non-personal data. This shift to harnessing non-identifiable, anonymised, or aggregated data creates a lacuna in the governance of data, as data protection laws like the GDPR have traditionally focused on identifiable data and on giving individuals control over their personal data.
So, among other economic proposals, the NPD Report proposes a policy framework to regulate such anonymised data, to fill this lacuna. The question now is: how well do its recommendations measure up to the challenges of regulating non-personal data?
How Does The Government Define Non-Personal Data?
The NPD Report proposes the regulation of non-personal data, which it defines as data that is never related to an identifiable person, such as data on weather conditions, or personal (identifiable) data which has been rendered anonymous through certain technological techniques (such as data anonymisation). The report also recommends the mandatory cross-sharing of this non-personal data between companies, communities of individuals, and the government. The purposes for which this data may be mandated to be shared fall under three broad buckets: national security, community benefit, and promoting market competition.
However, if such data is not related to an identifiable individual, then how can it be protected under personal data privacy laws?
To address these challenges in part, the report introduces two key concepts: collective privacy and data trusts.
The NPD Report defines collective privacy as a right emanating from a community or group of people bound by common interests and purposes. It recommends that communities or groups of people exercise control over their non-personal data (distinct from an individual exercising control over their personal data), and do so via an appropriate nominee called a data trustee, who would exercise their privacy rights on behalf of the entire community. These two interconnected concepts of collective privacy and data trusteeship merit deeper exploration, given their significant impact on how we view privacy rights in the digital age.
What is Collective Privacy and How Shall We Protect It?
The concept of collective privacy shifts the focus from an individual controlling their privacy rights, to a group or a community having data rights as a whole. In the age of Big Data analytics, the NPD Report does well to discuss the risks of collective privacy harms to groups of people or communities. It is essential to look beyond traditional notions of privacy centered around an individual, as Big Data analytical tools rarely focus on individuals, but on drawing insights at the group level, or on “the crowd” of technology users.
In a revealing example from 2013, data processors who accessed New York City's taxi trip data (including trip dates and times) were able to infer, with a degree of accuracy, whether a taxi driver was a devout Muslim, even though data on the taxi licenses and medallion numbers had been anonymised. Data processors linked recurring pauses in taxi trips with regular prayer times to arrive at their conclusions. Such findings and classifications may result in heightened surveillance of, or discrimination against, such groups or communities as a whole.
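A toy reconstruction shows how little identifying information such an inference needs. The trip records and thresholds below are invented; the point is that recurring long pauses on Friday afternoons, visible in fully 'anonymised' logs, suffice to flag a driver for a sensitive group-level inference.

```python
from datetime import datetime, timedelta

# Toy anonymised trip log: (medallion_hash, dropoff_time, next_pickup_time).
trips = [
    ("a1f3", datetime(2013, 6, 7, 13, 5), datetime(2013, 6, 7, 13, 55)),
    ("a1f3", datetime(2013, 6, 14, 13, 2), datetime(2013, 6, 14, 13, 50)),
    ("b7c2", datetime(2013, 6, 7, 13, 10), datetime(2013, 6, 7, 13, 18)),
]

WINDOW = (13, 14)  # Friday early afternoon, roughly aligned with prayer times

def long_friday_pauses(rows):
    """Gaps of 40+ minutes between trips during the Friday window."""
    return [
        pickup - dropoff
        for _, dropoff, pickup in rows
        if dropoff.weekday() == 4                   # Friday
        and WINDOW[0] <= dropoff.hour < WINDOW[1]
        and (pickup - dropoff) >= timedelta(minutes=40)
    ]

for medallion in {m for m, _, _ in trips}:
    rows = [t for t in trips if t[0] == medallion]
    if len(long_friday_pauses(rows)) >= 2:          # recurring pattern
        print(f"{medallion}: recurring Friday pauses -> sensitive inference possible")
```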
An example of such a community in the report itself is of people suffering from a socially stigmatised disease who happen to reside in a particular locality in a city. It might be in the interest of such a community to keep details about their ailment and residence private, as even anonymised data pointing to their general whereabouts could lead to harassment and the violation of their privacy.
In such cases, harms arise not specifically to an individual, but to a group or community as a whole. Even if data is anonymised (and rendered completely un-identifiable), insights drawn at a group level help decipher patterns and enable profiling at the macro level.
However, the community suffering from the disease might also see some value in sharing limited, anonymised data about themselves with certain third parties, for example with experts conducting medical research to find a cure for the disease. Such a group may nominate a data trustee, as envisioned by the NPD Report, who facilitates the exchange of non-personal data on their behalf and takes their privacy interests into account with relevant data processors.
This model of data trusteeship is thus clearly envisioned as a novel intermediary relationship between users and data trustees, distinct from traditional notions of a legal trust or trustee for the management of property, to facilitate the proper exchange of data and protect users against privacy harms like large-scale profiling and behavioural manipulation.
But, what makes data trusts unique?
Are Data Trusts the New ‘Mutual Funds’?
Currently, data processors process a wide range of data, both personal and non-personal, about users, without providing them accessible information about how they collect or use it. Users who wish to use the services offered by data processors do not have any negotiating power over the collection or processing of their data. This results in information asymmetries and power imbalances between the two parties, without much recourse for users, especially in relation to non-personal data, which is not covered by personal data protection laws like the GDPR or India's Draft Personal Data Protection Bill.
Data trusts can help solve the challenges arising in the everyday data transactions taking place on the Internet. Acting as experts on behalf of users, they may be in a better position than individual users to negotiate for privacy-respecting practices. By standardising data sharing practices like data anonymisation and demanding transparency in data usage, data trusts may also be better placed to protect collective privacy rights than an unstructured community. One of the first public recommendations to establish data trusts came from the UK Government's independent 2017 report, 'Growing the artificial intelligence industry in the UK', which recommended the establishment of data trusts to increase access to data for AI systems.
Simply put: data trusts might be akin to mutual fund managers, as they facilitate complex investments on behalf of and in the best interests of their individual investors.
The Fault in Our Data Sarkaar
Since data trusts are still untested at a large scale, certain challenges need to be anticipated at the time of their conceptualisation, which the NPD Report does not account for.
For example, in some cases, the report suggests that the role of the data trustee could be assumed by an arm of the government. The Ministry of Health and Family Welfare, for instance, could act as a trustee for all data on diabetes for Indian citizens.
However, the government acting as a data trustee raises important questions of conflict of interest; after all, government agencies might utilise relevant non-personal data for the profiling of citizens. The NPD Report does not provide solutions to such challenges.
Additionally, the NPD Report does not clarify the ambiguity in the relationship between data trusts and data trustees, adding to the complexity of its recommendations. While the report envisions data trusts as institutional structures purely for the sharing of given data sets, it defines data trustees as agents of 'predetermined' communities who are tasked with protecting their data rights.
Broadly, this is just like how commodities (like stocks or gold) are traded over an exchange (such as data trusts) while agents such as stockbrokers (or data trustees) assist investors in making their investments. This is distinct from how Indian law treats traditional conceptions of trusts and trustees, and might require fresh law for its creation.
In terms of the exchange of non-personal data, both of these tasks, that is, facilitating data sharing and protecting the data rights of communities and groups, could possibly be delegated to a single entity: the data trust. Individuals who do not form part of any 'predetermined' community, and thus may not find themselves represented by an appropriate trustee, may also benefit from such hybrid data trusts for the protection of their data rights.
Clearly, multiple cautionary steps need to be in place for data trusts to work, and for the privacy of millions to be protected; these steps are yet to be fully spelled out in the Report.
Firstly, there is a need for legal and regulatory mechanisms to ensure that these trusts genuinely represent the best interests of their members. Without a strong alignment with regulatory policies, data trusts might enable the further exploitation of data, rather than bringing about reforms in data governance. Borrowing from traditional trust law, a genuine representation of interests can be ensured by placing a legal obligation on the trust, in the form of an enforceable trust deed, establishing a fiduciary duty (or duty of care) towards its members.
Secondly, data trusts will require money to operate, and funding models will need to be developed that ensure the independence of trusts while serving their members' best interests. Various models will need to be tested before implementation, including government-funded data trusts and user-subscription-based systems.
Thirdly, big questions about the transparency of data trusts remain. As these institutions may be the focal point of data exchange in India, ensuring their independence and accountability will be crucial. Auditing, continuous reviews, and reporting mechanisms will need to be enmeshed in future regulation to ensure the accountability of data trusts.
Privacy Rights Must Be Paramount
As the law tries to keep pace with technology in India, recognising new spheres which require immediate attention, like the challenges of collective privacy, becomes pertinent for policymakers. The NPD Report takes momentous strides in recognising some of these challenges which require swift redressal, but fails to take into consideration emerging scholarship on the autonomy, transparency, and strength of its proposed data trusts.
For example, large data processors will need to be incentivised to engage with data trusts. Smaller businesses may engage with data trusts readily, given the newfound ease of access to large amounts of data. But it might be difficult to incentivise Big Tech companies to engage with such structures, given their existing stores of wide-scale data on millions of users. This is where the government will need to go back to the drawing board and engage with multiple stakeholders to ensure that innovation goes hand in hand with a privacy-respecting data governance framework. Novel solutions like data trusts should be tested with pilot projects before being baked into formal policy or law.
More than three years after India’s Supreme Court reaffirmed the right to privacy as intrinsic to human existence and a guarantee under the Indian Constitution, government policy continues to treat data–whether personal or non-personal–as a resource to be ‘mined’. In this atmosphere, to meaningfully recognise the right to privacy and self-determination, the government must lay down a data governance framework which seeks to protect the rights of users (or data providers), lays down principles of transparency and accountability, and establishes strong institutions for enforcement of the law.
(This post is in the context of the report released by the Committee of Experts on the Non-Personal Data Governance Framework, as constituted by the Ministry of Electronics and Information Technology. CCG's comments on the report can be accessed here.)
The Personal Data Protection Bill, 2019 (PDP Bill/ Bill) was introduced in the Lok Sabha on December 11, 2019, and was immediately referred to a joint committee of the Parliament. The joint committee published a press communique on February 4, 2020 inviting comments on the Bill from the public.
The Bill is the successor to the Draft Personal Data Protection Bill 2018 (Draft Bill 2018), recommended by a government-appointed expert committee chaired by Justice B.N. Srikrishna. In August 2018, shortly after the recommendations and publication of the draft Bill, the Ministry of Electronics and Information Technology (MeitY) invited comments on the Draft Bill 2018 from the public. (Our comments are available here.)[1]
In this post we undertake a preliminary examination of:
The scope and applicability of the PDP Bill
The application of general data protection principles
The rights afforded to data subjects
The exemptions provided to the application of the law
In future posts in the series, we will examine:
The restrictions on cross border transfer of personal data
The structure and functions of the regulatory authority
The enforcement mechanism and the penalties under the PDP Bill
Scope and Applicability
The Bill identifies four different categories of data. These are personal data, sensitive personal data, critical personal data and non-personal data.
Personal data is defined as “data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, whether online or offline, or any combination of such features with any other information, and shall include any inference drawn from such data for the purpose of profiling”. (emphasis added)
The addition of inferred data to the definition of personal data is an interesting reflection of the way the conversation around data protection has evolved in the past few months, and requires further analysis.
Sensitive personal data is defined as data that may reveal, be related to or constitute a number of different categories of personal data, including financial data, health data, official identifiers, sex life, sexual orientation, genetic data, transgender status, intersex status, caste or tribe, and religious and political affiliations / beliefs. In addition, under clause 15 of the Bill the Central Government can notify other categories of personal data as sensitive personal data in consultation with the Data Protection Authority and the relevant sectoral regulator.
Similar to the 2018 Bill, the current Bill does not define critical personal data; clause 33 gives the Central Government the power to notify what is included under critical personal data. However, in its report accompanying the 2018 Bill, the Srikrishna committee had referred to some examples of critical personal data that relate to critical state interests, like Aadhaar numbers, genetic data, biometric data, and health data.
The Bill retains the terminology introduced in the 2018 Draft Bill, referring to data controllers as 'data fiduciaries' and data subjects as 'data principals'. The new terminology was introduced to reflect the fiduciary nature of the relationship between data controllers and data subjects. However, whether the specific terminology has any greater impact on the protection and enforcement of data subjects' rights remains to be seen.
Application of PDP Bill 2019
The Bill is applicable to (i) the processing of any personal data, which has been collected, disclosed, shared or otherwise processed in India; (ii) the processing of personal data by the Indian government, any Indian company, citizen, or person/ body of persons incorporated or created under Indian law; and (iii) the processing of personal data in relation to any individuals in India, by any persons outside of India.
The scope of the 2019 Bill is largely similar in this context to that of the 2018 Draft Bill. However, one key difference is seen in relation to anonymised data. While the 2018 Draft Bill completely exempted anonymised data from its scope, the 2019 Bill does not apply to anonymised data except under clause 91, which gives the government powers to mandate the use and processing of non-personal data or anonymised personal data under policies to promote the digital economy. A few concerns arise from this change in the treatment of anonymised personal data. First, there are concerns about the concept of anonymisation of personal data itself. While the Bill provides that the Data Protection Authority (DPA) will specify appropriate standards of irreversibility for the process of anonymisation, it is not clear that a truly irreversible form of anonymisation is possible at all; the sketch below illustrates the concern. Accordingly, we need more clarity on what safeguards will be applicable to the use of anonymised personal data.
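To make the re-identification concern concrete, the following is a minimal, purely illustrative sketch of a 'linkage attack'. Every name, value and column label here is invented; the point is only that joining an 'anonymised' dataset with a public auxiliary dataset on shared quasi-identifiers can re-attach identities to sensitive records:

```python
# Hypothetical linkage attack: all data below is invented for illustration.
import pandas as pd

# An "anonymised" dataset: direct identifiers (name, phone number) removed,
# but quasi-identifiers (pincode, birth year, gender) retained.
anonymised = pd.DataFrame({
    "pincode": ["110001", "560034", "400050"],
    "birth_year": [1984, 1991, 1979],
    "gender": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],  # sensitive attribute
})

# A public auxiliary dataset (say, a leaked or published membership list)
# carrying names alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["A. Sharma", "R. Nair"],
    "pincode": ["110001", "560034"],
    "birth_year": [1984, 1991],
    "gender": ["F", "M"],
})

# Joining on the quasi-identifiers re-identifies the "anonymised" records.
reidentified = public.merge(anonymised, on=["pincode", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

Because a combination of quasi-identifiers such as pincode, birth year and gender is often unique to a single person, merely stripping direct identifiers rarely makes anonymisation irreversible, which is why the standards of irreversibility the DPA is to specify will be difficult to define.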
Second, is the Bill’s focus on the promotion of the digital economy. We have previously discussed some of the concerns regarding focus on the promotion of digital economy in a rights based legislation inour comments to the Draft Bill 2018.
These issues continue to be of concern, and are perhaps heightened by the introduction of a specific provision on the subject in the 2019 Bill (especially without adequate clarity on what services or policy-making efforts in this direction are to be informed by the use of anonymised personal data). Many of these issues are also still under discussion by the committee of experts set up to deliberate on a governance framework for non-personal data. The mandate of this committee includes the study of various issues relating to non-personal data, and making specific suggestions for the central government's consideration on the regulation of non-personal data.
The formation of the non-personal data committee was in pursuance of the Justice Srikrishna Committee's recommendation to frame a legal framework for the protection of community data, where the community is identifiable. The mandate of the expert committee will overlap with the application of clause 91(2) of the Bill.
Data Fiduciaries, Social Media Intermediaries and Consent Managers
Data Fiduciaries
As discussed above, the Bill categorises data controllers as data fiduciaries and significant data fiduciaries. Any person who determines the purpose and means of processing personal data (including the State, companies, juristic entities or individuals) is considered a data fiduciary. Some data fiduciaries may be notified as 'significant data fiduciaries' on the basis of factors such as the volume and sensitivity of personal data processed, the risk of harm, etc. Significant data fiduciaries are held to higher standards of data protection. Under clauses 27-30, they are required to carry out data protection impact assessments, maintain accurate and up-to-date records, have their policies and the conduct of their processing of personal data audited, and appoint a data protection officer.
Social Media Intermediaries
The Bill introduces a distinct category of intermediaries called social media intermediaries. Under clause 26(4) a social media intermediary is ‘an intermediary who primarily or solely enables online interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services’. Intermediaries that primarily enable commercial or business-oriented transactions, provide access to the Internet, or provide storage services are not to be considered social media intermediaries.
Social media intermediaries may be notified as significant data fiduciaries if their users exceed a prescribed threshold and their actions have, or are likely to have, a significant impact on electoral democracy, security of the State, public order, or the sovereignty and integrity of India.
Under clause 28, social media intermediaries that have been notified as significant data fiduciaries will be required to provide for voluntary verification of users, accompanied by a demonstrable and visible mark of verification.
Consent Managers
The Bill also introduces the idea of a 'consent manager', i.e. a (third-party) data fiduciary which provides for the management of consent through an 'accessible, transparent and interoperable platform'. The Bill does not contain any details on how consent management will be operationalised, and only states that these details will be specified by regulations under the Bill.
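Since the Bill leaves the mechanics entirely to future regulations, the following is only a speculative sketch of what a consent manager's ledger might look like; every field, class and method name here is our own assumption, not the Bill's design:

```python
# Speculative sketch of a consent manager's record-keeping; nothing here is
# prescribed by the PDP Bill, which leaves these details to regulations.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional

@dataclass
class ConsentRecord:
    data_principal_id: str      # the user granting consent
    data_fiduciary_id: str      # the entity seeking to process personal data
    purpose: str                # a specific, clear purpose (cf. clause 11)
    data_categories: List[str]  # e.g. ["contact", "financial"]
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None  # consent must be withdrawable

class ConsentManager:
    """A third-party platform where a data principal can grant, review and
    withdraw consent in one place (the 'interoperable platform' idea)."""

    def __init__(self) -> None:
        self._records: Dict[str, List[ConsentRecord]] = {}

    def grant(self, record: ConsentRecord) -> None:
        # File the consent record under the data principal's identifier.
        self._records.setdefault(record.data_principal_id, []).append(record)

    def withdraw(self, principal_id: str, fiduciary_id: str) -> None:
        # Mark all of this principal's active consents to a fiduciary as withdrawn.
        for rec in self._records.get(principal_id, []):
            if rec.data_fiduciary_id == fiduciary_id and rec.withdrawn_at is None:
                rec.withdrawn_at = datetime.now()

    def is_active(self, principal_id: str, fiduciary_id: str, purpose: str) -> bool:
        # A fiduciary may process data only while a matching consent is active.
        return any(
            rec.data_fiduciary_id == fiduciary_id
            and rec.purpose == purpose
            and rec.withdrawn_at is None
            for rec in self._records.get(principal_id, [])
        )
```

However consent management is eventually operationalised, the regulations will have to answer the questions this sketch glosses over: how consent records are authenticated, how fiduciaries learn of withdrawal, and how records move between competing consent managers.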
Data Protection Principles and Obligations of Data Fiduciaries
Consent and grounds for processing
The Bill recognises consent as well as a number of other grounds for the processing of personal data.
Clause 11 provides that personal data shall only be processed if consent is provided by the data principal at the commencement of processing. This provision, similar to the consent provision in the 2018 Draft Bill, draws from various principles including those under the Indian Contract Act, 1872 to inform the concept of valid consent under the PDP Bill. The clause requires that the consent should be free, informed, specific, clear and capable of being withdrawn.
Moreover, explicit consent is required for the processing of sensitive personal data. The current Bill appears to be silent on issues such as incremental consent which were highlighted in our comments in the context of the Draft Bill 2018.
The Bill provides for additional grounds for processing of personal data, consisting of very broad (and much criticised) provisions for the State to collect personal data without obtaining consent. In addition, personal data may be processed without consent if required in the context of employment of an individual, as well as a number of other ‘reasonable purposes’. Some of the reasonable purposes, which were listed in the Draft Bill 2018 as well, have also been a cause for concern given that they appear to serve mostly commercial purposes, without regard for the potential impact on the privacy of the data principal.
In a notable change from the Draft Bill 2018, the PDP Bill appears to be silent on whether these other grounds for processing will apply to sensitive personal data (with the exception of processing in the context of employment, which is explicitly barred).
Other principles
The Bill also incorporates a number of traditional data protection principles in the chapter outlining the obligations of data fiduciaries. Personal data can only be processed for a specific, clear and lawful purpose. Processing must be undertaken in a fair and reasonable manner and must ensure the privacy of the data principal. This is now a clear mandatory requirement, as opposed to a 'duty' owed by the data fiduciary to the data principal in the Draft Bill 2018 (a change that appears to be in line with recommendations made in multiple comments to the Draft Bill 2018 by various academics, including our own).
Purpose and collection limitation principles are mandated, along with a detailed description of the kind of notice to be provided to the data principal, either at the time of collection, or as soon as possible if the data is obtained from a third party. The data fiduciary is also required to ensure that data quality is maintained.
A few changes in the application of data protection principles, as compared to the Draft Bill 2018, can be seen in the data retention and accountability provisions.
On data retention, clause 9 of the Bill provides that personal data shall not be retained beyond the period ‘necessary’ for the purpose of data processing, and must be deleted after such processing, ostensibly a higher standard as compared to ‘reasonably necessary’ in the Draft Bill 2018. Personal data may only be retained for a longer period if explicit consent of the data principal is obtained, or if retention is required to comply with law. In the face of the many difficulties in ensuring meaningful consent in today’s digital world, this may not be a win for the data principal.
Clause 10 on accountability continues to provide that the data fiduciary will be responsible for compliance in relation to any processing undertaken by the data fiduciary or on its behalf. However, the data fiduciary is no longer required to demonstrate such compliance.
Rights of Data Principals
Chapter V of the PDP Bill 2019 outlines the Rights of Data Principals, including the rights to access, confirmation, correction, erasure, data portability and the right to be forgotten.
Right to Access and Confirmation
The PDP Bill 2019 makes some amendments to the right to confirmation and access, included in clause 17 of the Bill. The right has been expanded in scope by the inclusion of sub-clause (3). Clause 17(3) requires data fiduciaries to provide data principals with information about the identities of any other data fiduciaries with whom their personal data has been shared, along with details about the kind of data that has been shared.
This allows the data principal to exert greater control over their personal data and its use. The rights to confirmation and access are important rights that inform and enable a data principal to exercise other rights under the data protection law. As recognized in the Srikrishna Committee Report, these are ‘gateway rights’, which must be given a broad scope.
Right to Erasure
The right to correction (clause 18) has been expanded to include the right to erasure. This allows data principals to request the erasure of personal data which is not necessary for processing. While data fiduciaries may be allowed to refuse correction or erasure, they would be required to produce a justification in writing for doing so and, if the dispute continues, indicate alongside the personal data that such data is disputed.
The addition of a right to erasure is an expansion of rights from the 2018 Bill. While the right to be forgotten only restricts or discontinues disclosure of personal data, the right to erasure goes a step further and empowers the data principal to demand complete removal of their data from the data fiduciary's systems.
Many of the concerns expressed in the context of the Draft Bill 2018, in terms of the procedural conditions for the exercise of the rights of data principals, as well as the right to data portability specifically, continue to persist in the PDP Bill 2019.
Exceptions and Exemptions
While the PDP Bill ostensibly enables individuals to exercise their right to privacy against the State and the private sector, it provides several exemptions, which raise serious concerns.
The Bill grants broad exceptions to the State. In some cases, it is in the context of specific obligations such as the requirement for individuals’ consent. In other cases, State action is almost entirely exempted from obligations under the law. Some of these exemptions from data protection obligations are available to the private sector as well, on grounds like journalistic purposes, research purposes and in the interests of innovation.
The most concerning of these provisions are the exemptions granted to intelligence and law enforcement agencies under the Bill. The Draft Bill 2018 also provided exemptions to intelligence and law enforcement agencies, but only insofar as the privacy-invasive actions of these agencies were permitted under law and met procedural standards, as well as the legal standards of necessity and proportionality. We have previously discussed some of the concerns with this approach here.
The exemptions provided to these agencies under the PDP Bill, seem to exacerbate these issues.
Under the Bill, the Central Government can exempt an agency of the government from the application of the Act by an order with reasons recorded in writing, if it is of the opinion that the exemption is necessary or expedient in the interest of sovereignty and integrity, security of the State, friendly relations with foreign states, or public order, or for preventing incitement to the commission of any cognizable offence relating to these grounds. Not only are the grounds on which government agencies can be exempted worded expansively, the procedure for granting these exemptions is also bereft of safeguards.
The executive in India at times functions with opacity and unfettered discretion, which calls for a robust system of checks and balances to avoid abuse. The Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act) enable government surveillance of communications made over telephones and the internet. For comparison, we primarily refer to the Telegraph Act, as it allows the government to intercept phone calls, by an order in writing, on grounds similar to those mentioned in clause 35 of the Bill. However, the Telegraph Act limits the use of this power to two scenarios – the occurrence of a public emergency or the interest of public safety. The government cannot intercept communications made over telephones in the absence of these two preconditions. The Supreme Court in People's Union for Civil Liberties v. Union of India (1997) introduced guidelines to check abuse of surveillance powers under the Telegraph Act, which were later incorporated in Rule 419A of the Indian Telegraph Rules, 1951. A prominent safeguard in Rule 419A requires that surveillance and monitoring orders be issued only after considering 'other reasonable means' for acquiring the required information. The court further limited the interpretation of 'public emergency' and 'public safety' to mean "the prevalence of a sudden condition or state of affairs affecting the people at large and calling for immediate action" and "the state or condition of freedom from danger or risk at large" respectively. In spite of these safeguards, the procedure for intercepting telephone communications under the Telegraph Act has been criticised for lack of transparency and improper implementation. For instance, a 2014 report revealed that around 7,500 – 9,000 phone interception orders were issued by the Central Government every month. Applying the procedural safeguards in each case would have been practically impossible given the sheer numbers. Legislative and judicial oversight thus becomes a necessity in such cases.
The constitutionality of India’s surveillance apparatus inclduing section 69 of the IT Act which allows for surveillance on broader grounds on the basis of necessity and expediency and not ‘public emergency’ and ‘public safety’, has been challenged before the Supreme Court and is currently pending. Clause 35 of the Bill also mentions necessity and expediency as prerequisites for the government to exercise its power to grant exemption, which appear to be vague and open-ended as they are not defined. The test of necessity, implies resorting to the least intrusive method of encroachment up on privacy to achieve the legitimate state aim. This test is typically one among several factors applied in deciding on whether a particular intrusion on a right is tenable or not, under human rights law. In his concurring opinion in Puttaswamy (I) J. Kaul had included ‘necessity’ in the proportionality test. (However, this test is not otherwise well developed in Indian jurisprudence). Expediency, on the other hand, is not a specific legal basis used for determining the validity of an intrusion on human rights. It has also not been referred to in Puttaswamy (I) as a basis of assessing a privacy violation. The use of the term ‘expediency’ in the Bill is deeply worrying as it seems to bring down the threshold for allowing surveillance which is a regressive step in the context of cases like PUCL and Puttaswamy (I). A valid law along with the principles of proportionality and necessity are essential to put in place an effective system of checks and balances on the powers of the executive to provide exemptions. It seems unlikely that the clause will pass the test of proportionality (sanction of law, legitimate aim, proportionate to the need of interference, and procedural guarantees against abuse) as laid down by the Supreme Court in Puttaswamy (I).
The Srikrishna Committee report had recommended that surveillance should not only be conducted under law (and not executive order), but should also be subject to oversight and transparency requirements. The Committee had argued that the tests of lawfulness, necessity and proportionality provided for under clauses 42 and 43 (of the Draft Bill 2018) were sufficient to meet the standards set out in the Puttaswamy judgment. Since the PDP Bill completely does away with these safeguards and leaves the decision to executive discretion, the law is unconstitutional. After the Bill was introduced in the Lok Sabha, Justice Srikrishna criticised it for granting expansive exemptions in the absence of judicial oversight. He warned that the consequences could be disastrous from the point of view of safeguarding the right to privacy, and could turn the country into an "Orwellian State". He has also opined on the need for a separate legislation to govern the terms under which the government can resort to surveillance.
Clause 36 of the Bill deals with exemptions from some provisions for certain kinds of processing of personal data. It combines four separate exemption clauses from the Draft Bill 2018 (clauses 43, 44, 46 and 47), covering the processing of personal data in the interests of the prevention, detection, investigation and prosecution of contraventions of law; for the purpose of legal proceedings; for personal or domestic purposes; and for journalistic purposes. The Draft Bill 2018 had detailed provisions requiring a law passed by Parliament or a State Legislature, which is necessary and proportionate, for processing personal data in the interests of the prevention, detection, investigation and prosecution of contraventions of law. Clause 36 of the Bill contains no such requirement. We had argued that the exemptions granted by the Draft Bill 2018 (clauses 43, 44, 46 and 47) were wide, vague and needed clarification, but the exemptions under clause 36 of the Bill are even more ambiguous, as they merely list the exemptions without any specificity or procedural safeguards.
Under the Draft Bill 2018, the Authority could not grant exemptions from the obligations of fair and reasonable processing, security safeguards and data protection impact assessments for research, archiving or statistical purposes. As per the current Bill, the Authority can provide exemption from any of the provisions of the Act for research, archiving or statistical purposes.
The last addition to this chapter of exemptions is the creation of a sandbox for encouraging innovation. The newly added clause 40 is aimed at encouraging innovation in artificial intelligence, machine learning or any other emerging technology in the public interest. Beyond exemption from some of the obligations under Chapter II, the details of what the sandbox entails need further clarity. Additionally, to be considered an eligible applicant, a data fiduciary must obtain certification of its privacy-by-design policy from the DPA, as per clause 40(4) read with clause 22. Though the provision's intent is well appreciated, it requires clarification on the grounds for selection and the details of what the sandbox might entail.
[1] At the time of the introduction of the PDP Bill 2019, the Minister for Law and Justice of India, Mr. Ravi Shankar Prasad, suggested that over 2000 inputs were received on the Draft Bill 2018, based on which changes were made in the PDP Bill 2019. However, these comments and inputs have not been published by MeitY; only a handful of comments have been published, by the stakeholders who submitted them.
Huawei found support from Indian telcos in the 5G rollout; PayPal withdrew from Facebook's Libra cryptocurrency project; Foreign Portfolio Investors moved MeitY against the Data Protection Bill; the CJEU ruled against Facebook in a case relating to global takedown of content; and Karnataka joined the list of states considering implementing the NRC to remove illegal immigrants – presenting this week's most important developments in law, tech and national security.
Digital India
[Sep 30] Why the imminent global economic slowdown is a growth opportunity for Indian IT services firms, Tech Circle report.
[Sep 30] Norms tightened for IT items procurement for schools, The Hindu report.
[Oct 1] Govt runs full throttle towards AI, but tech giants want to upskill bureaucrats first, Analytics India Magazine report.
[Oct 3] – presenting this week’s most important developments in law, tech and national security. MeitY launches smart-board for effective monitoring of the key programmes, The Economic Times report.
[Oct 3] “Use human not artificial intelligence…” to keep a tab on illegal constructions: Court to Mumbai civic body, NDTV report.
[Oct 3] India took 3 big productivity leaps: Nilekani, Livemint report.
[Oct 4] MeitY to push for more sops to lure electronic makers, The Economic Times report; Inc42 report.
[Oct 4] Core philosophy of Digital India embedded in Gandhian values: Ravi Shankar Prasad, Financial Express report.
[Oct 4] How can India leverage its data footprint? Experts weigh in at the India Economic Summit, Quartz report.
[Oct 4] Indians think jobs would be easy to find despite automation: WEF, Tech Circle report.
[Oct 4] Telangana govt adopts new framework to use drones for last-mile delivery, The Economic Times report.
[Oct 5] Want to see ‘Assembled in India’ on an iPhone: Ravi Shankar Prasad, The Economic Times report.
[Oct 6] Home market gets attractive for India’s IT giants, The Economic Times report.
Internet Governance
[Oct 2] India Govt requests maximum social media content takedowns in the world, Inc42 report; Tech Circle report.
[Oct 3] Facebook can be forced to delete defamatory content worldwide, top EU court rules, Politico EU report.
[Oct 4] EU ruling may spell trouble for Facebook in India, The Economic Times report.
[Oct 4] TikTok, TikTok… the clock is ticking on the question whether ByteDance pays its content creators, ET Tech report.
[Oct 6] Why data localization triggers a heated debate, The Economic Times report.
[Oct 7] Sensitive Indian govt data must be stored locally, Outlook report.
Data Protection and Privacy
[Sep 30] FPIs move MeitY against data bill, seek exemption, ET Markets report; Inc42 report; Financial Express report.
[Oct 1] United States: CCPA exception approved by California legislature, Mondaq.com report.
[Oct 1] Privacy is gone, what we need is regulation, says Infosys' Kris Gopalakrishnan, News18 report.
[Oct 1] Europe's top court says active consent is needed for tracking cookies, Tech Crunch report.
[Oct 3] Turkey fines Facebook $282,000 over data privacy breach, Deccan Herald report.
Free Speech
[Oct 1] Singapore's 'fake news' law to come into force Wednesday, but rights groups worry it could stifle free speech, The Japan Times report.
[Oct 2] Minister says Singapore's fake news law is about 'enabling' free speech, CNBC report.
[Oct 3] Hong Kong protests: Authorities to announce face mask ban, BBC News report.
[Oct 3] ECHR: Holocaust denial is not protected free speech, ASIL brief.
[Oct 4] FIR against Mani Ratnam, Adoor and 47 others who wrote to Modi on communal violence, The News Minute report; Times Now report.
[Oct 5] UN asks Malaysia to repeal laws curbing freedom of speech, The New Indian Express report.
[Oct 6] When will our varsities get freedom of expression: PC, Deccan Herald report.
[Oct 6] UK Government to make university students sign contracts limiting speech and behavior, The Times report.
[Oct 7] FIR on Adoor and others condemned, The Telegraph report.
Aadhaar, Digital IDs
[Sep 30] Plea in SC seeking linking of social media accounts with Aadhaar to check fake news, The Economic Times report.
[Oct 1] Why another omnibus national ID card?, The Hindu Business Line report.
[Oct 2] 'Kenyan court process better than SC's approach to Aadhaar challenge': V Anand, who testified against biometric project, LiveLaw report.
[Oct 3] Why Aadhaar is a stumbling block in Modi govt's flagship maternity scheme, The Print report.
[Oct 4] Parliament panel to review Aadhaar authority functioning, data security, NDTV report.
Opinions and Analyses
[Oct 6] Roger Marshall, Deccan Herald, Big oil, Big Data and the shape of water.
[Oct 6] Neil Chatterjee, Fortune, The power grid is evolving. Cybersecurity must too.
[Oct 7] Scott W Pink, Mondaq.com, EU: What is GDPR and CCPA and how does it impact blockchain?
[Oct 7] GN Devy, The Telegraph, Has India slid into an irreversible Talibanization of the mind?
[Oct 7] Susan Ariel Aaronson, South China Morning Post, The Trump administration’s approach to AI is not that smart: it’s about cooperation, not domination.
The deadline to link PAN cards with Aadhaar was extended to December 31 this week; the Election Commission ruled that the voting rights of those excluded in the NRC process remain unaffected; the Home Minister proposed a digital census with multipurpose ID cards for 2021; and 27 nations including the US, UK and Canada issued a joint statement urging a rules-based order in cyberspace – presenting this week's most important developments in law, technology and national security.
Aadhaar and Digital IDs
[Sep 23] Home Minister announces digital census in 2021, proposed multipurpose ID card, Entrackr report; Business Today report.
[Sep 24] NRIs can now apply for Aadhaar on arrival without 182-day wait, The Economic Times report.
[Sep 24] Aadhaar will be linked to driving license to avoid forgery: Ravi Shankar Prasad, The Indian Express report.
[Sep 24] One nation, one card? Amit Shah floats idea of all-in-one ID; here are all the problems with that idea, Medianama report; Money Control report.
[Sep 24] Explained: Is India likely to have a multipurpose national ID card? The Indian Express report.
[Sep 24] UIDAI nod to 'voluntary' use of Aadhaar for National Population Register rollout, The Economic Times report.
[Sep 24] Govt must decide on Aadhaar-social media linkage: SC, Deccan Herald report.
[Sep 25] New law needed for Aadhaar-social media linkage: UIDAI, The Economic Times report; Inc42 report.
[Sep 26] NPR process to include passport, voter ID, Aadhaar and other details, Business Standard report.
[Sep 27] Gang involved in making fake Aadhaar cards busted, The Tribune report.
[Sep 27] What will happen if you don't link your PAN card with Aadhaar by Sep 20, The Quint report.
[Sep 27] Explained: The National Population Register, and the controversy around it, The Indian Express report.
[Sep 27] Aadhaar to weed out bogus social security beneficiaries in Karnataka, Deccan Herald report.
[Sep 29] Bajrang Dal wants Aadhaar mandatory at dandiya to keep 'non-Hindus' out, The Hindustan Times report; The Wire report.
[Sep 30] Kerala urges Centre to extend deadline to link ration cards with Aadhaar, The News Minute report.
[Sep 30] PAN-Aadhaar linking deadline extended to December 31, The Economic Times report.
Digital India
[Sep 25] India’s regulatory
approach should focus on the regulation of the ‘core’: IAMAI, Livemint report.
[Sep 27] India may have to offer
sops to boost electronic manufacturing, ET Tech report; Inc42 report.
[Sep 27] Digital India, start-ups
are priorities for $5 trillion economy: PM Modi, Medianama report.
[Sep 29] Tech giants aim to skill
Indian govt officials in AI, cloud, ET CIO report.
[Sep 29] India’s share in IT,
R&D biz up in 2 years: report, The Economic Times report.
Internet Governance
[Sep 24] Supreme Court to MeitY: What's the status of intermediary guidelines? Tell us by Oct 15, Medianama report.
[Sep 26] Will not be 'excessive' with social media rules, say Govt officials, Inc42 report.
[Sep 26] Government trying to balance privacy and security in draft IT intermediary norms, The Economic Times report.
[Sep 27] Citizens, tech companies served better with some regulation: Facebook India MD Ajit Mohan, ET Tech report; Inc42 report.
[Sep 27] Balance benefits of internet, data security: Google CEO Sundar Pichai, ET Tech report; Business Today report.
Free Speech
[Sep 25] Jadavpur University calls upon 'stakeholders' to ensure free speech on campus, The New Indian Express report.
[Sep 28] RSS raises objections to uncensored content of Manoj Bajpayee's "The Family Man", The Hindu report; Outlook report.
Privacy and Data Protection
[Sep 23] A landmark decision on Tuesday could radically reshape how Google's search results work, Business Insider report.
[Sep 23] Google tightens its voice assistant rules amidst privacy backlash, Wired report.
[Sep 24] Dell rolls out new data protection storage appliances and capabilities, ZDNet report.
[Sep 24] 'Right to be forgotten' privacy rule is limited by Europe's top court, The New York Times report; Live Law report.
[Sep 27] Nigeria launches investigation into Truecaller for potential breach of privacy, Medianama report.
[Sep 29] Right to be forgotten will be arduous as India frames data protection law, Business Standard report.
[Sep 30] FPIs move against data bill, seek exemption, ET Telecom report; Entrackr report.
Data Localisation
[Sep 26] Reconsider imposition of data localisation: IAMAI report, The Economic Times report.
[Sep 27] Why data is not oil: Here's how India's data localisation norms will hurt the economy, Inc42 report.
Digital Payments and Fintech
[Sep 23] RBI rider on credit bureau data access has Fintech in a quandary, ET Tech report.
[Sep 23] The face of India's crypto lobby readies for a clash, Ozy report.
[Sep 23] Why has Brazil's Central Bank included crypto assets in trade balance? Coin Telegraph report.
[Sep 24] French retailers widening crypto acceptance, Tech Xplore report.
[Sep 26] Why crypto hoaxes are so successful, Quartz report.
[Sep 26] South Africa: the next frontier for crypto exchanges, Coin Telegraph report.
[Sep 27] The crypto wars' strange bedfellows, Forbes report.
[Sep 28] Crypto industry is already preparing for Google's 'quantum supremacy', Decrypt report.
[Sep 29] How crypto gambling is regulated around the world, Coin Telegraph report.
Tech and Law Enforcement
[Sep 29] New WhatsApp and Facebook Encryption 'Backdoors' – What's really going on, Forbes report.
[Sep 28] Facebook, WhatsApp will have to share messages with UK Government, Bloomberg report.
[Sep 23] Secret FBI subpoenas scoop up personal data from scores of companies, The New York Times report.
[Sep 23] 'Don't transfer the WhatsApp traceability case', Internet Freedom Foundation asks Supreme Court, Medianama report.
[Sep 24] China offers free subway rides to citizens who register their face with surveillance system, The Independent report.
[Sep 24] Facial recognition technology in public housing prompts backlash, The New York Times report.
[Sep 24] Facebook-Aadhaar linkage and WhatsApp traceability: Supreme Court says government must frame rules, CNBC TV18 report.
[Sep 27] Fashion that counters surveillance cameras, Business Times report.
[Sep 27] Unnao rape case: Delhi court directs Apple to give Sengar's location details on day of alleged rape, Medianama report.
[Sep 27] Face masks to decoy t-shirts: the rise of anti-surveillance fashion, Times of India report.
[Sep 30] Battle for privacy and encryption: WhatsApp and government head for a showdown on access to messages, ET Prime report.
[Sep 29] Improving digital evidence sharing, Scottish Government news report; Public Technology report.
Internal Security: J&K
[Sep 23] Government launches internet facilitation centre in Pulwama for students, Times of India report; Business Standard report.
[Sep 23] Army chief rejects 'clampdown' in Jammu and Kashmir, Times of India report.
[Sep 24] Rising power: Why India has faced muted criticism over its Kashmir policy, Business Standard report.
[Sep 24] 'Restore Article 370, 35A in Jammu and Kashmir, withdraw army, paramilitary forces': 5-member women's group will submit demands to Amit Shah, Firstpost report.
[Sep 24] No normalcy in Kashmir, says fact finding team, The Hindu report.
[Sep 25] End clampdown: Kashmir media, The Telegraph report.
[Sep 25] Resolve Kashmir issue through dialogue and not through collision: Erdogan, The Economic Times report.
[Sep 25] Rajya Sabha deputy chair thwarts Pakistan's attempt at Kashmir at Eurasian Conference, The Economic Times report.
[Sep 25] Pakistan leader will urge UN intervention in Kashmir, The New York Times report.
[Sep 25] NSA Ajit Doval back in Srinagar to review security situation, The Hindustan Times report.
[Sep 27] Communication curbs add fresh challenge to Kashmir counter-insurgency operations, News18 report.
[Sep 27] Fresh restrictions in parts of Kashmir, The Hindu report.
[Sep 27] US wants 'rapid' easing of Kashmir restrictions, Times of India report.
[Sep 27] Kashmir issue: Rescind action on Art. 370, OIC tells India, The Hindu report.
[Sep 28] India objects to China's reference to J&K and Ladakh at UNGA, The Economic Times report; The Hindu report.
[Sep 29] Surveillance, area domination operations intensified in Kashmir, The Economic Times report; Financial Express report.
[Sep 29] Police impose restrictions in J&K after Imran Khan's speech at UNGA, India Today report.
Internal Security: NRC and the North-East
[Sep 23] Assam framing cyber security policy to secure data related to NRC, police, services, The Economic Times report; Money Control report.
[Sep 24] BJP will tell SC that we reject this NRC, says Himanta Biswa Sarma, Business Standard report.
[Sep 24] Amit Shah to speak on NRC, Citizenship Amendment Bill in Kolkata on Oct 1, The Economic Times report.
[Sep 26] 'Expensive' legal battle for those rejected in Assam NRC final list, The Economic Times report.
[Sep 27] Scared of NRC? Come back in 2022, The Telegraph report.
[Sep 27] Voters left out of NRC will have right to vote, rules Election Commission, India Today report; The Wire report.
[Sep 27] NRC: Assam government announces 200 Foreigners Tribunals in 33 districts, Times Now report; Times of India report.
[Sep 28] Judge urges new FT members to examine NRC claims with utmost care, Times of India report.
National Security Legislation
[Sep 23] Centre will reintroduce Citizenship Bill in Parliament: Himanta Biswa Sarma, The Hindu report.
[Sep 26] National Security Guard: History, Functions and Operations, Jagran Josh report.
[Sep 28] Left parties seek revocation of decision on Article 370, The Tribune India report.
Tech and National Security
[Sep 25] Army to start using Artificial Intelligence in 2-3 years: South Western Army commander, The Print report; India Today report; The New Indian Express report; Financial Express report.
[Sep 23] Modi, Trump set new course on terrorism, border security, The Hindu report.
[Sep 23] PM Modi in the US: Trump promises more defence deals with India, military trade to go up, Financial Express report.
[Sep 23] Punjab police bust terror module supplied with weapons by drones from Pak, NDTV report.
[Sep 26] Lockheed Martin to begin supplying F-16 wings from Hyderabad plant in 2020, Livemint report.
[Sep 26] Drones used for cross-border arms infiltration in Punjab a national security issue, says Randhawa, The Hindu report.
[Sep 27] UK MoD sets up cyber team for secure innovation, UK Authority report.
[Sep 29] New tri-services special ops division, meant for surgical strikes, finishes first exercise today, The Print report.
[Sep 30] After Saudi attacks, India developing anti-drone technology to counter drone menace, Eurasian Times report.
Tech and Elections
[Sep 20] Microsoft will offer free Windows 7 support for US election officials through 2020, Cyber Scoop report.
[Sep 26] Social media platforms to follow 'code of ethics' in all future elections: EC, The Economic Times report.
[Sep 28] Why is EC not making 'authentic' 2019 Lok Sabha results public? The Quint report.
Cybersecurity
[Sep 24] Androids and iPhones hacked with just one WhatsApp click – and Tibetans are under attack, Forbes report.
[Sep 25] Sharp questions can help board oversee cybersecurity, The Wall Street Journal report.
[Sep 25] What we know about CrowdStrike, the cybersecurity firm Trump mentioned in Ukraine call, and its billionaire CEO, Forbes report.
[Sep 25] 36% of smaller firms witnessed data breaches in 2019 globally, ET Rise report.
[Sep 28] Defence Construction Canada hit by cyber attack – corporation's team trying to restore full IT capability, Ottawa Citizen report.
[Sep 29] Experts call for collective efforts to counter cyber threats, The New Indian Express report.
[Sep 29] Microsoft spots malware that turns PCs into zombie proxies, ET Telecom report.
[Sep 29] US steps up scrutiny of airplane cybersecurity, The Wall Street Journal report.
Cyberwarfare
[Sep 24] 27 countries sign cybersecurity pledge urging rules-based control over cyberspace in Joint Statement, with digs at China and Russia, CNN report; IT world Canada report; Meri Talk report.
[Sep 26] Cyber Peace Institute fills a critical need for cyber attack victims, Microsoft blog.
[Sep 29] Britain is ‘at war every day’ due to constant cyber attacks, Chief of the Defence Staff says, The Telegraph report.
Telecom and 5G
[Sep 27] Telcos’ IT investments
intact, auto companies may slow pace: IBM exec, ET Tech report.
[Sep 29] Telecom players to lead
digital transformation in India, BW Businessworld report.
More on Huawei
[Sep 22] Huawei confirms another nasty surprise for Mate 30 buyers, Forbes report.
[Sep 23] We're on the same page with government on security: Huawei, The Economic Times report.
[Sep 24] The debate around 5G's safety is getting in the way of science, Quartz report (paywall).
[Sep 24] Govt will take call on Huawei with national interest in mind: Telecom Secy, Business Standard report.
[Sep 24] Huawei enables 5G smart travel system at Beijing airport, Tech Radar report.
[Sep 25] Huawei 5G backdoor entry unproven, The Economic Times report.
[Sep 25] US prepares $1 bn fund to replace banned Huawei kit, Tech Radar report.
[Sep 26] Google releases large dataset of deepfakes for researchers, Medianama report.
[Sep 26] Huawei willing to license 5G technology to a US firm, The Hindu Business Line report; Business Standard report.
[Sep 26] Southeast Asia's top phone carrier still open to Huawei 5G, Bloomberg report.
[Sep 29] Russia rolls out the red carpet for Huawei over 5G, The Economic Times report.
Emerging Tech and AI
[Sep 20] Google researchers have reportedly achieved "Quantum Supremacy", Financial Times report; MIT Technology Review report.
[Sep 23] Artificial Intelligence revolution in healthcare in India: All we need to know, The Hindustan Times report.
[Sep 23] A new joystick for the brain-controlled vehicles of the future, Defense One report.
[Sep 24] Computing and AI: Humanistic Perspectives from MIT, MIT News report.
[Sep 24] Emerging technologies such as AI, 5G posing threats to privacy, says report, China Daily report.
[Sep 25] Alibaba unveils chip developed for artificial intelligence era, Financial Times report.
[Sep 26] Pentagon wants AI to interpret 'strategic activity' around the globe, Defense One report.
[Sep 27] Only 10 jobs created for every 100 jobs taken away by AI, ET Tech report.
[Sep 27] Experts say these emerging technologies should concern us, Business Insider report.
[Sep 27] What is on the horizon for export controls on 'emerging technologies'? Industry comments may hold a clue, Mondaq.com report.
[Sep 27] India can become world leader in artificial intelligence: Vishal Sikka, Money Control report.
[Sep 27] Elon Musk issues a terrifying prediction of 'AI robot swarms' and huge threat to mankind, The Daily Express (UK) report.
[Sep 27] Russia's national AI Centre is taking shape, Defense One report.
[Sep 29] Explained: What is 'quantum supremacy', The Hindu report.
[Sep 29] Why are scientists so excited about a new quantum computing milestone?, Scroll.in report.
[Sep 29] Artificial Intelligence has a gender bias problem – just ask Siri, The Wire report.
[Sep 29] How AI is changing the landscape of digital marketing, Inc42 report.
Opinions and Analyses
[Sep 21] Wim Zijnenburg, Defense One, Time to Harden International Norms on Armed Drones.
[Sep 23] David Sanger and Julian Barnes, The New York Times, The urgent search for a cyber silver bullet against Iran.
[Sep 23] Neven Ahmad, PRIO Blog, The EU's response to the drone age: A united sky.
[Sep 23] Biswajit Dhar and KS Chalapati Rao, The Wire, Why an India-US Free Trade Agreement would require New Delhi to reorient key policies.
[Sep 23] Filip Cotfas, Money Control, Five reasons why data loss prevention has to be taken seriously.
Cybersecurity experts warned of a new 'SIM jacking' threat; the Kerala High Court recognised a right to access the internet as the internet shutdown in Kashmir entered its 50th day; more updates followed on the linkage of Aadhaar with voter IDs and social media as the Indian Army braced itself to adopt AI – presenting this week's most important developments in law, tech and national security.
Aadhaar
[Sep 16] Here are the amendments the Election Commission wants to the Representation of the People Act for Aadhaar-Voter ID linkage, Medianama report.
[Sep 18] Why Maj. Gen. Vombatkere has challenged Aadhaar Amendment Act in the Supreme Court; On WhatsApp and traceability, Medianama report.
[Sep 19] Drop in Aadhaar enrolments in J&K, The Economic Times report.
[Sep 20] In-principle decision to link Aadhaar with GST registration, The Economic Times report.
[Sep 23] Aadhaar card is now mandatory for nominees of your EPF account, Livemint report.
Digital India
[Sep 18] Indo-US ICT working group to meet on Sept 30, Oct 1, Medianama report.
[Sep 17] NITI Aayog frames guidelines for automated inspection of vehicles, ET Auto report.
[Sep 17] What TikTok told MEITY about its intermediary status, data collection, and policies for children, Medianama report.
[Sep 18] Soon, lands will have Aadhaar-like unique numbers, The Economic Times report; Business Today report.
[Sep 18] Drones to be used to digitally map India: report, Medianama report.
[Sep 18] PMO panel to release policy to boost handset manufacturing in India: report, Medianama report.
[Sep 19] Karnataka to set up exclusive body to boost innovation, The Hindu report.
[Sep 20] ‘Right To Access Internet Is Part Of Right To Privacy And Right To Education’: Kerala HC, Live Law report; Hindu report; NDTV report.
Data Protection and Privacy
[Sep 15] Privacy debate between govt, Facebook continues; no winner yet, Money Control report.
[Sep 16] Singapore, Philippines sign MoU on personal data protection, The Manila Times report.
[Sep 16] Industry wants careful drafting of regulations on non-personal data, The Economic Times report.
[Sep 16] Here are the top three reasons why data protection is required in every business, Firstpost report.
[Sep 20] Sensitive, super-sensitive data must be stored locally in India: RS Prasad, Business Standard report.
[Sep 20] Yet another data leak in Indian government database exposes multiple citizen IDs, Inc42 report.
[Sep 22] Infosys co-founder Kris Gopalakrishnan to lead panel on protection of non-personal data, Financial Express report.
E-Commerce
[Sep 16] Odisha government makes e-marketplace mandatory for procurements, The New Indian Express report.
[Sep 17] Ministry of Consumer Affairs extends deadline for comments on draft E-Commerce Guidelines 2019 to October 31, Medianama report.
FinTech and Digital Payments
[Sep 16] WhatsApp to roll out its payment services by end of this year: report, Medianama report; The Economic Times report.
[Sep 18] RBI proposes norms to regulate payment gateways and payment aggregators, Entrackr report.
[Sep 19] Regulatory shock for fintech firms: RBI blocks unregulated access to consumer credit history, Entrackr report.
[Sep 19] DSCI, MeitY and Google India join hands for 'Digital Payment Abhiyan', The Economic Times report.
Cryptocurrencies
[Sep 16] The toss of a Bitcoin: How crypto ban will hurt 5 mn Indians, 20k Blockchain developers, The Economic Times report.
[Sep 16] US sanctions three alleged crypto hacking groups from North Korea, Coin Desk report.
[Sep 16] Crypto firms assess how to comply with anti-money laundering standards, The Wall Street Journal report.
[Sep 19] Bitcoin and crypto wallets are now being targeted by malware, Forbes report.
[Sep 21] Weekends are for Altcoins when it comes to crypto market gains, ET Markets report.
[Sep 21] Chinese officials surprisingly chill on crypto, Decrypt report.
Cybersecurity
[Sep 13] Ransomware has a new target, Defense One report.
[Sep 16] Deep learning and machine learning to transform cybersecurity, Tech Wire Asia report.
[Sep 16] America needs a whole-of-society approach to cybersecurity. 'Grand Challenges' can help, Defense One report.
[Sep 17] Financial asset firm PCI ordered to pay $1.5 million for poor cybersecurity practices, ZD Net report.
[Sep 20] Current Act outdated, need to include cyber security in IT legal framework: DCA chief, The Indian Express report.
[Sep 20] 10% of IT budget should be used for cybersecurity: Rear Admiral Mohit Gupta, ET Times report.
[Sep 20] Once hacked, twice shy: How auto supplier Harman learned to fight cyber car jackers, ET Auto report.
[Sep 21] Cybersecurity a big opportunity for telcos, says IBM executive, The Economic Times report.
[Sep 23] Cybersecurity experts raise alarm over new SIM jacking threat, The New Indian Express report.
[Sep 23] Cybersecurity: Tackling the menace of phishing, Financial Express report.
Tech and Law Enforcement; Surveillance
[Sep 15] Facebook moots 'prospective' solution to WhatsApp issue; India stands firm on traceability, Business Today report; Livemint report.
[Sep 18] Chinese firms are driving the rise of AI surveillance across Africa, Quartz report.
[Sep 18] Documents reveal how Russia taps phone companies for surveillance, Tech Crunch report.
[Sep 20] WhatsApp traceability case petitioner asks court to remove Aadhaar from the plea, consider only 'authorised govt proofs', Medianama report; Inc42 report; Bar & Bench report.
[Sep 20] Chennai-based KPost says traceability is possible, wants to be impleaded in WhatsApp case, Medianama report.
Tech and National Security
[Sep 13] Pentagon's former top hacker wants to inject some Silicon Valley into the defense industry, Defense One report.
[Sep 16] Here's how startups are helping the Defence Ministry up its game, Money Control report.
[Sep 16] After 6 years in exile, Edward Snowden explains himself, Wired report.
[Sep 17] US tells Saudi Arabia oil attacks were launched from Iran, The Wall Street Journal report.
[Sep 17] Why Rafale jets may be inducted into IAF by next summer only, Livemint report.
[Sep 17] US Air Force to shift billions of dollars to network its weapons, Defense One report.
[Sep 18] India to achieve US$26 billion defence industry by 2025: Defence Minister, Business Standard report.
[Sep 18] Mitigating security risks from emerging technologies, Army Technology analysis.
[Sep 18] Revised draft defence procurement norms to be ready by November end, The Hindu report.
[Sep 20] The NSA is running a satellite hacking experiment, Defense One report.
[Sep 20] Army to host seminar on artificial intelligence next week; seeks to enhance lethality, The Economic Times report; India Today report; The New Indian Express report.
[Sep 20] Defence Procurement: Not a level playing field for private sector, PSUs still rule, Bharat Shakti report.
[Sep 20] Indian Air Force 'accepts' Rafale, formal hand over on Dussehra, Livemint report.
[Sep 22] Amid US-India blooming ties, Washington prepares to take down Indian air defence systems, EurAsian Times report.
[Sep 23] Government likely to order 36 more Rafale fighter jets, The Economic Times report.
Tech and Elections
[Sep 20] Social media companies raise concerns over Election Commission's voluntary code of ethics, Medianama report.
Internal Security: J&K
[Sep 16] Supreme Court says normalcy to return to Kashmir but with national security in mind, India Today report.
[Sep 16] Farooq Abdullah booked under Public Safety Act, committee to decide duration of arrest: report, Financial Express report.
[Sep 17] Amnesty's report on the (mis)use of Public Safety Act in J&K counters the govt's narrative, Youth Ki Awaaz report.
[Sep 18] China says Kashmir issue may not be a 'major topic' during Modi-Xi meet, Livemint report.
[Sep 19] In Pakistan-held Kashmir, growing calls for independence, The New York Times report.
[Sep 20] Kashmir residents say they are being charged by telcos despite no service, The Hindu report.
[Sep 20] UN Chief could discuss Kashmir issues at UNGA: UN spokesman, The Economic Times report.
[Sep 20] How military drones are becoming deadly weapons across the globe, The Economic Times report.
[Sep 22] Modi's Digital India comes crashing down in Kashmir's longest ever internet gag, The Wire report; The Hindu report.
[Sep 23] No clampdown in Kashmir, only communication line of terrorists stopped: Army Chief Bipin Rawat, India Today report.
Internal Security: NRC
[Sep 16] Those declared foreigners cannot file NRC appeal, says Assam govt, Hindustan Times report.
This week, Delhi International Airport deployed facial recognition on a 'trial basis' for 3 months; landline communications were restored in Kashmir; and the Government is mulling certification for online video streaming platforms like Netflix and Prime Video – presenting this week's most important developments in law, tech and national security.
Aadhaar
[Sep 3] PAN will be issued automatically using Aadhaar for filing returns: CBDT, DD News report.
[Sep 3] BJD set to collect Aadhaar numbers of its members in Odisha, Opposition parties slam move, News 18 report; The New Indian Express report; Financial Express report.
[Sep 5] Aadhaar is secure, says ex-UIDAI chief, Times of India report.
[Sep 5] Passport-like Aadhaar centre opened in Chennai: Online appointment booking starts, Livemint report.
[Sep 8] Plans to link Janani Suraksha and Matra Vandan schemes with Aadhaar: CM Yogi Adityanath, Times of India report.
Digital India
[Sep 5] Digital media bodies welcome 26% FDI cap, Times of India report.
[Sep 6] Automation 'not threat' to India's IT industry, ET Tech report.
[Sep 6] Tech Mahindra to modernise AT&T network systems, Tech Circle report.
Data Protection and Governance
[Sep 2] Health data comes under the
purview of Data Protection Bill: IAMAI, Inc42 report.
[Sep 2] Credit history should not be
viewed as sensitive data, say online lenders, Livemint report.
[Sep 3] MeitY may come up with policy
on regulation of non-personal data, Medianama report.
[Sep 3] MeitY to work on a white paper
to gain clarity on public data regulations, Inc42 report.
[Sep 6] Treating data as commons is
more beneficial, says UN report, Medianama report.
[Sep 9] Indian Government may allow
companies to sell non-personal data of their users, Inc42 report; The Economic Times report.
[Sep 9] Tech firms may be compelled to
share public data of their users, ET Tech report.
Data Privacy and Breaches
[Sep 2] Chinese face-swap app Zao faces
backlash over user data protection, KrAsia report; Medianama report.
[Sep 2] Study finds Big Data eliminates
confidentiality in court judgments, Swiss Info report.
[Sep 4] YouTube will pay $170 million
to settle claims it violated child privacy laws, CNBC report; FTC Press Release.
[Sep 4] Facebook will now let people
opt-out of its face recognition feature, Medianama report.
[Sep 4] Mental health websites in
Europe found sharing user data for ads, Tech Crunch report.
[Sep 5] A huge database of Facebook
users’ phone numbers found online, Tech Crunch report.
[Sep 5] Twitter has temporarily
disabled tweet to SMS feature, Medianama report.
[Sep 6] Fake apps a trap to track your
device and crucial data, ET Tech report.
[Sep 6] 419 million Facebook users’
phone numbers leaked online, ET Tech report; Medianama report.
[Sep 9] Community social media
platform, LocalCircles, highlights data misuse worries, The Economic Times report.
Free Speech
[Sep 7] Freedom of expression is not
absolute: PCI Chairman, The Hindu report.
[Sep 7] Chennai: Another IAS officer
resigns over ‘freedom of expression’, Deccan Chronicle report.
[Sep 8] Justice Deepak Gupta: Law on
sedition needs to be toned down if not abolished, The Wire report.
Online Content Regulation
[Sep 3] Government plans certification
for Netflix, Amazon Prime, Other OTT Platforms, Inc42 report.
[Sep 4] Why Justice for Rights went to
court, asking for online content to be regulated, Medianama report.
[Sep 4] YouTube claims new hate speech
policy working, removals up 5x, Medianama report.
[Sep 6] MeitY may relax norms on
content monitoring for social media firms, ET Tech report; Inc42 report; Entrackr report.
E-Commerce
[Sep 4] Offline retailers accuse Amazon
and Flipkart of deep discounting, predatory pricing and undercutting, Medianama
report; Entrackr report.
[Sep 6] Companies rely on digital
certification startups to foolproof customer identity, ET Tech report.
Digital Payments and FinTech
[Sep 3] A sweeping reset is in the
works to bring India in line with fintech’s rise, The Economic Times report.
[Sep 3] Insurance and lending companies
in agro sector should use drones to reduce credit and insurance risks: DEA’s
report on fintech, Medianama report.
Cybersecurity
[Sep 4] Firefox will now block
third-party tracking and cryptomining by default for all users, Medianama report.
[Sep 4] Insurance companies are fueling
ransomware attacks, Defense One report.
[Sep 5] Firms facing shortage of
skilled workforce in cybersecurity: Infosys Research, The Economic Times report.
[Sep 5] Cybersecurity a boardroom
imperative in almost 50% of global firms: Survey, Outlook report; ANI report.
[Sep 5] DoD unveils new cybersecurity
certification model for contractors, Federal News Network report.
[Sep 5] Jigsaw Academy launches
cybersecurity certification programme in India, DQ India report.
[Sep 6] Indians lead the world as
Facebook big bug hunters, ET Tech report.
[Sep 6] Australia is getting a new
cybersecurity strategy, ZD Net report.
[Sep 9] China’s 5G, industrial internet
roll-outs to fuel more demand for cybersecurity, South China Morning Post report.
Tech and National Security
[Sep 3] Apache copters to be inducted
today, The Pioneer report.
[Sep 3] How AI will predict Chinese and
Russian moves in the Pacific, Defense One report.
[Sep 3] US testing autonomous
border-patrol drones, Defense One report.
[Sep 3] Meet the coalition pushing for
‘Cyber Peace’ rules, Defense One report.
[Sep 4] US wargames to try out concepts
for fighting China, Russia, Defense One report.
[Sep 4] Southern Command hosts seminar
on security challenges, Times of India report; The Indian Express report.
[Sep 4] Russia, already India’s biggest
arms supplier, in line for more, Business Standard report.
[Sep 4] Pentagon, NSA prepare to train
AI-powered cyber defenses, Defense One report.
[Sep 5] Cabinet clears procurement of Akash
missile system at Rs. 5500 crore, Times Now report.
[Sep 5] India to go ahead with $3.1
billion US deal for maritime patrol aircraft, The Economic Times report.
[Sep 5] DGCA certifies ‘small’ category
drone for complying with ‘No-Permission, No-Takeoff’ protocol, Medianama report.
[Sep 5] India has never been aggressor
but will not hesitate in using its strength to defend itself: Rajnath Singh,
The Economic Times report.
[Sep 5] Panel reviewing procurement
policy framework to come out with new versions of DPP, DPM by March 2020, The
Economic Times report; Business Standard report; Deccan Herald report.
[Sep 5] Russia proposes joint
development of submarines with India, The Hindu report.
[Sep 7] Proud of you: India tells ISRO
after contact lost with Chandrayaan-2 lander, India Today report.
Tech and Elections
[Sep 4] ECI asks social media firms to
follow voluntary code of ethics ahead of state polls: report, Medianama report.
[Sep 5] Why the 2020 campaigns are
still soft targets for hackers, Defense One report.
[Sep 5] Facebook meets with FBI to
discuss election security, Bloomberg report.
[Sep 5] Facebook is making its own AI
deepfakes to head off a disinformation disaster, MIT Tech Review report.
[Sep 6] Congress party to reorganise its
data analytics department, Medianama report.
Internal Security: J&K
[Sep 4] Long convoy, intel failure:
Multiple lapses led to Pulwama terror attack, finds CRPF inquiry, India Today report; Kashmir Media Service report; The Wire report.
[Sep 4] Extension of President’s Rule
in Kashmir was not delayed, MHA says in response to SC lawyer’s article,
Scroll.in report.
[Sep 6] Landline communication restored
in Kashmir Valley: Report, Medianama report.
[Sep 7] Kashmir’s Shia areas face
curbs, all Muharram processions banned, The Quint report.
[Sep 7] No question of army atrocities
in Kashmir as it’s only fighting terrorists: NSA Ajit Doval, India Today report.
[Sep 8] More than 200 militants trying
to cross into Kashmir from Pakistan: Ajit Doval, Money Control report.
[Sep 8] ‘Such unilateral actions are
futile’, says India after Pakistan blocks airspace for President Kovind,
Scroll.in report; NDTV report.
Internal Security: NRC
[Sep 2] Contradictory voices in Assam
Congress on NRC: Tarun Gogoi slams it as waste paper, party MP says historic
document, India Today report.
[Sep 3] Why Amit Shah is silent on NRC,
India Today report.
[Sep 7] AFSPA extended for 6 months in
Assam, Deccan Herald report.
[Sep 7] At RSS mega meet, concerns over
Hindus being left out of NRC: Sources, Financial Express report.
National Security Institutions and Legislation
[Sep 5] Azhar, Saeed, Dawood declared
terrorists under UAPA law, Deccan Herald report; The Economic Times report.
[Sep 8] Home Minister says India’s
national security apparatus more robust than ever, Livemint report.
[Sep 8] Financial safety, not national
security, reason for women to join BSF: Study, India Today report.
Telecom/5G
[Sep 6] Security is an issue in 5G:
NCSC Pant on Huawei, Times of India report.
More on Huawei
[Sep 1] Huawei believes banning it from
5G will make countries insecure, ZD Net report.
[Sep 2] Huawei upbeat on AI strategy
for India, no word on 5G roll-out plans yet, Business Standard report.
[Sep 3] Huawei denies US allegations of
technology theft, NDTV Gadgets 360 report; Business Insider report; The Economic Times report.
[Sep 3] Shocking Huawei ‘Extortion and
Cyberattack’ allegations in new US legal fight, Forbes report; Livemint report; BBC News report; The Verge report.
[Sep 3] Committed to providing the most
advanced products: Huawei, ET Telecom report.
[Sep 4] Huawei says 5G rollout in India
will be delayed by 3 years if it’s banned, Livemint report.
[Sep 4] Trump not interested in talking
Huawei with China, Tech Circle report.
[Sep 5] Nepal’s only billionaire
enlists Huawei to transform country’s elections, Financial Times report.
[Sep 8] Trump gets shocking new Huawei
warning – from Microsoft, Forbes report.
Emerging Tech
[Aug 30] Facebook is building an AI
assistant inside Minecraft, Forbes report.
[Sep 3] AWS partners with IIT KGP for
much needed push to India’s AI skilling, Inc42 report.
[Sep 3] Behind the rise of China’s
facial recognition giants, Wired report.
[Sep 4] Facebook won’t use facial
recognition on you unless you tell it to, Quartz report.
[Sep 4] An AI app that turns you into a
movie star has risked the privacy of millions, MIT Technology Review report.
[Sep 6] Police use of facial recognition
is accepted by British court, The New York Times report.
[Sep 6] Facebook, Microsoft announce
challenge to detect deepfakes, Medianama report.
[Sep 6] Facial recognition tech to
debut at Delhi airport’s T3 terminal; on ‘trial basis’ for next three months,
Medianama report.
Internet Shutdowns
[Sep 3] After more than 10 weeks,
internet services in towns of Rakhine and Chin restored, Medianama report.
[Sep 4] Bangladesh bans mobile phone
services in Rohingya camps, Medianama report.
Opinions and Analyses
[Sep 2] Michael J Casey, CoinDesk, A crypto fix for a broken international monetary system.
[Sep 2] Yengkhom Jilangamba, News18 Opinion, Not a solution to immigration problem, NRC final list has only brought to surface fault lines within society.
[Sep 2] Samuel Bendett, Defense One, What Russian Chatbots Think About Us.
[Sep 2] Shivani Singh, Hindustan Times, India’s no first use policy is a legacy that must be preserved.
[Sep 3] Abir Roy, Financial Express, Why a comprehensive law is needed for data protection.
[Sep 3] Dhirendra Kumar, The Economic Times, Aadhaar is back for mutual fund investments.
[Sep 3] Ashley Feng, Defense One, Welcome to the new phase of US-China tech competition.
[Sep 3] Nesrine Malik, The Guardian, The myth of the free speech crisis.
[Sep 3] Tom Wheeler and David Simpson, Brookings Institution, Why 5G requires new approaches to cybersecurity.
[Sep 3] Karen Roby, Tech Republic, Why cybersecurity is a big problem for small businesses.
[Sep 4] Wendy McElroy, Bitcoin.com, Crypto needs less regulation, not more.
[Sep 4] Natascha Gerlach and Elisabeth Macher, Mondaq.com, US CLOUD Act’s potential impact on the GDPR.
[Sep 4] Peter Kafka, Vox, The US Government isn’t ready to regulate the internet. Today’s Google fine shows why.
[Sep 5] Murtaza Bhatia, Firstpost, Effective cybersecurity can help in accelerating business transformation.
[Sep 5] MG Devasahayam, The Tribune, Looking into human rights violations by Army.
[Sep 5] James Hadley, Forbes, Cybersecurity Frameworks: Not just for bits and bytes, but flesh and blood too.
[Sep 5] MR Subramani, Swarajya Magazine, Question at heart of TN’s ‘WhatsApp traceability case’: Are you endangering national security if you don’t link your social media account with Aadhaar?
[Sep 5] Justin Sherman, Wired, Cold War analogies are warping tech policy.
[Sep 6] Nishtha Gautam, The Quint, Peer pressure, militant threats enforcing civil curfew in Kashmir?
[Sep 6] Harsh V Pant and Kartik Bommakanti, Foreign Policy, Modi reimagines the Indian military.
[Sep 6] Shuman Rana, Business Standard, Free speech in the crosshairs.
[Sep 6] David Gokhshtein, Forbes, Thoughts on American Crypto Regulation: Considering the Pros and Cons.
[Sep 6] Krishan Pratap Singh, NDTV Opinion, How to read Modi Government’s stand on Kashmir.