Introduction to AI Bias

By Nidhi Singh, CCG

Note: This article is adapted from an op-ed published in the Hindu Business Line, which can be accessed here.

A recent report by Nasscom suggests that the integrated adoption of artificial intelligence (AI) and a data utilisation strategy could add an estimated USD 500 billion to the Indian economy. In June 2022, MeitY published the Draft National Data Governance Framework Policy, which aims to enhance the access, quality, and use of non-personal data in ‘line with the current emerging technology needs of the decade.’ This is another step in the worldwide push by governments to bring machine learning and AI models, which are trained on individuals’ data, into the sphere of governance.

While India is still considering the legislative and regulatory safeguards that must accompany the use of such data in AI systems, many countries have already begun implementing these systems, sometimes with severe consequences. In January 2021, for example, the Dutch government resigned en masse in response to a child welfare fraud scandal that involved the alleged misuse of benefit schemes.

The Dutch tax authorities used a ‘self-learning’ algorithm to assess benefit claims and classify them according to their potential risk for fraud. The algorithm flagged certain applications as being at a higher risk for fraud, and these applications were then forwarded to an official for manual scrutiny. However, while officials were told that a flagged application had a higher likelihood of containing false claims, they were not told why the system had classified it as high-risk.

Following the adoption of an overly strict interpretation of the government policy on identifying fraudulent claims, the AI system used by the tax authorities began to flag every data inconsistency, including actions like failure to sign a page of the form, as an act of fraud. Additionally, the Dutch government’s zero-tolerance policy on tax fraud meant that erroneously flagged families had to return benefits not only from the period in which the fraud was alleged to have been committed but from up to five years before that as well. Finally, the algorithm also learnt to systematically identify claims filed by parents with dual citizenship as high-risk, and these were subsequently marked as potentially fraudulent. As a result, a disproportionately high number of the people labelled as fraudsters by the algorithm had an immigrant background.

What makes the situation more complicated is that, due to the ‘black box effect’ and the lack of transparency about how an AI system makes its decisions, it is difficult to pin down a single factor that caused the ‘self-learning’ algorithm to arrive at the biased output. This biased output delivered by the AI system is an example of AI bias.

The problems of AI Bias

AI bias is said to occur when there is an anomaly in the output produced by a machine learning algorithm. It may be caused by prejudiced assumptions made during the algorithm’s development process or by prejudices in the training data. The concerns surrounding potential AI bias in the deployment of algorithms are not new: for almost a decade, researchers, journalists, activists, and even tech workers have repeatedly warned about the consequences of bias in AI. The process of creating a machine learning algorithm is based on the concept of ‘training’. In a machine learning process, the computer is exposed to vast amounts of data, which it uses as a sample to learn how to make judgements or predictions. For example, an algorithm designed to judge a beauty contest would be trained on pictures and data relating to beauty pageants from the past. AI systems use algorithms made by human researchers, and if they are trained on flawed data sets, they may end up hardcoding bias into the system. In the beauty contest example, the algorithm failed its desired objective: it eventually chose winners based solely on skin colour, excluding contestants who were not light-skinned.

This brings us to one of the most fundamental problems in AI systems: ‘garbage in, garbage out’. AI systems depend heavily on accurate, clean, and well-labelled training data to learn from, which will, in turn, produce accurate and functional results. A vast majority of the time in deploying AI systems is spent preparing the data through processes like data collection, cleaning, preparation, and labelling, some of which tend to be very human-intensive. Additionally, AI systems are usually designed and operationalised by teams that tend to be homogenous in their composition, that is to say, they are generally composed of white men.

There are several factors that make AI bias hard to counter. One of the main problems is that the very foundations of these systems are often flawed. Recent research has shown that ten key data sets often used for machine learning and data science, including ImageNet (a large dataset of annotated photographs intended to be used as training data), are in fact riddled with errors. These errors can be traced to the quality of the data the system was trained on or to biases introduced by the labellers themselves, such as labelling more men as doctors and more women as nurses in pictures.

How do we fix bias in AI systems?

This is a question that many researchers, technologists, and activists are trying to answer. Some of the more common approaches include inclusivity, both in data collection and in the design of the system. There have also been calls for increased transparency and explainability, which would allow people to understand how AI systems make their decisions. For example, in the case of the Dutch algorithm, while the officials received an assessment from the algorithm stating that an application was likely to be fraudulent, it did not provide any reasons why the algorithm detected fraud. If the officials in charge of the second round of review had had more visibility into what the system flagged as an error, including missed signatures or dual citizenship, they might have been able to mitigate the damage.

One possible mechanism to address the problem of bias is the ‘blind taste test’. The mechanism checks whether the results produced by an AI system depend on a specific variable such as sex, race, economic status, or sexual orientation. Simply put, it tries to ensure that protected characteristics like gender, skin colour, or race do not play a role in decision-making processes.

The mechanism involves testing the algorithm twice: the first time with the variable, such as race, and the second time without it. In the first run, the model is trained on all variables including race; in the second, on all variables excluding race. If the model returns the same results, the AI system can be said to make predictions that are blind to that factor. If the predictions change with the inclusion of a variable, as with dual citizenship status in the case of the Dutch algorithm or skin colour in the beauty contest, the AI system would have to be investigated for bias. This is just one potential mitigation test. States are also experimenting with other technical interventions, such as the use of synthetic data to create less biased data sets.
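As a rough illustration, the test can be expressed in a few lines of code. The sketch below uses scikit-learn; the dataset file, column names, and choice of model are hypothetical placeholders, not details of any real system:

```python
# A minimal sketch of the blind taste test, assuming a tabular claims
# dataset. The file, column names and model choice are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("benefit_claims.csv")      # hypothetical claims data
protected = "dual_citizenship"              # variable under scrutiny
features = [c for c in df.columns if c != "is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["is_fraud"], test_size=0.3, random_state=0
)

# Run 1: model trained on all variables, including the protected one.
model_with = RandomForestClassifier(random_state=0).fit(X_train, y_train)
preds_with = model_with.predict(X_test)

# Run 2: identical setup, protected variable excluded.
blind = [c for c in features if c != protected]
model_without = RandomForestClassifier(random_state=0).fit(X_train[blind], y_train)
preds_without = model_without.predict(X_test[blind])

# If predictions diverge, the protected variable is influencing
# outcomes and the system should be investigated for bias.
disagreement = np.mean(preds_with != preds_without)
print(f"Predictions change for {disagreement:.1%} of claims")
```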

Where do we go from here?

The Dutch case is merely one example in a long line of instances that warrant higher transparency and accountability requirements for the deployment of AI systems. Many approaches have been, and are still being, developed to counter bias in AI systems. However, the crux remains that it may be impossible to fully eradicate bias from AI systems, since the biases of human developers and engineers are bound to be reflected in the technological systems they build. The effects of these biases can be devastating depending on the context and the scale at which the systems are deployed.

While new and emerging technical measures can be used as stopgaps, comprehensively dealing with bias in AI systems requires addressing bias in those who design and operationalise them. In the interim, regulators and states must step up to carefully scrutinise, regulate, or in some cases halt the use of AI systems that provide essential services to people. One example of such regulation is the framing and adoption of risk-based assessment frameworks for the adoption of AI systems, wherein the regulatory requirements depend on the level of risk the systems pose to individuals. This could include permanently banning the deployment of AI systems in areas where they may pose a threat to people’s safety, livelihood, or rights, such as credit scoring systems or systems that could manipulate human behaviour. For AI systems assessed as lower risk, such as AI chatbots used for customer service, there may be a lower threshold of prescribed safeguards. The question of whether AI systems can ever truly be free from bias may never be fully answered; however, the harms that these biases cause can be mitigated with proper regulatory and technical measures.

Technology and National Security Law Reflection Series Paper 6: The Legality of Lethal Autonomous Weapons Systems (“LAWS”)

Drishti Kaushik*

About the Author: The author is a final year student at the National Law University, Delhi. She has previously been associated with CCG as part of its summer school in March 2020 and has also worked with the Centre as a Research Assistant between September 2020 and March 2021. 

Editor’s note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law.

Introduction

Artificial Intelligence (AI) refers to the ability of a machine to perform tasks that typically require human intelligence. AI is currently used in a variety of fields and disciplines. One such field is the military, where AI is viewed as a means to reduce human casualties.

One such use case is the development and use of Lethal Autonomous Weapons Systems (LAWS), or “killer robots”, which can make life and death decisions without human intervention. Though the technology behind LAWS and its application remains hazy, LAWS have become a central point of debate globally. Several countries seek a complete preemptive ban on their use and development, highlighting that the technology to achieve such outcomes already exists. Other countries have expressed their preference for a moratorium on development until there are universal standards regarding production and usage.

This piece examines whether LAWS are lawful under International Humanitarian Law (IHL) as per the principles of distinction, proportionality, and precautions. LAWS are understood as fully autonomous weapon systems that, once activated, can select and engage targets without any human involvement. The author argues that it is premature to pronounce LAWS legal or illegal by hypothetically determining their compliance with extant humanitarian principles. Additionally, the author highlights the ethical considerations and the legal reviews required under IHL that must be satisfied before the legality of LAWS can be determined.

What are LAWS?

There is presently no universal definition of LAWS since the term ‘autonomous’ is ambiguous. ‘Autonomous’ in AI refers to the ability of a machine to make decisions without human intervention. The US Department of Defense issued a 2012 directive which defines LAWS as weapon systems that, once activated, can autonomously or independently “select and engage targets without any human intervention”. This means LAWS leave humans “out of the loop”. The “lack of human intervention” element is also present in definitions proposed by Human Rights Watch, the International Committee of the Red Cross (ICRC) and the UK Defence Ministry.

While completely autonomous weapon systems do not currently exist, the technology to develop them does. There are near-autonomous weapon systems like Israel’s Iron Dome and the American Terminal High Altitude Area Defense that can identify and engage incoming rockets. These are defensive in nature and protect sovereign nations from external attacks. By contrast, LAWS are weapon systems with the offensive capability of pursuing targets. Some scholars recommend incentivising defensive autonomous systems within the international humanitarian framework.

Even though there is no single definition, LAWS can be identified as machines or weapon systems which, once activated or switched on by humans, have the autonomy to select and search for targets as well as engage or attack them, without any human involvement in the entire selection and attacking process. The offensive nature of LAWS, as opposed to the use of automated systems for defensive purposes, is an important distinguishing factor for identifying them. An open letter by the Future of Life Institute calls for a ban on “offensive autonomous weapons beyond meaningful human control” instead of a complete ban on AI in the military sector. This distinction between offensive and defensive weapons in the definition of LAWS was also raised at the 2017 meeting of the Group of Governmental Experts on LAWS.

Autonomy and offensive characteristics are the primary grounds behind demands for a complete ban on LAWS. Countries like Zimbabwe are uncomfortable with a machine making life and death decisions, while others like Pakistan worry that military disparities with technologically superior nations will lead to an unfair balance of power.

There remains considerable uncertainty surrounding LAWS and their legality as weapons to be used in armed conflicts. Governance of these weapons, accountability, criminal liability, and product liability are specific avenues of concern.

Autonomous anti-air unit by Pascal. Licensed under CC0.

Legal Issues under IHL

The legality of LAWS under IHL can be assessed at two levels: (a) development, and (b) deployment/use.

Legal Review of New Weapons

The Geneva Conventions provide for the legal review of any new weapons or means of warfare under Article 36 of Additional Protocol I (“AP I”) to determine whether the development of new weapons complies with the Geneva Conventions and customary international law. The weapon must not have an “indiscriminate effect” or cause “superfluous injury” or “unnecessary suffering”, as chemical weapons do.

The conduct of LAWS must be ‘predictable’ and ‘reliable’ for them to be legally deployed in armed conflicts. If this is not possible, the conduct of LAWS in the midst of conflict may lead to an “indiscriminate effect or superfluous injury or unnecessary suffering”.

Principles of Distinction, Proportionality & Precautions 

LAWS must uphold the basic rule of distinction. LAWS should differentiate between civilian and military objects, and between those injured and those active in combat. Often even deployed troops are unable to make this determination successfully, and thus programming LAWS to uphold the principle of distinction remains a challenge.

Second, LAWS must uphold the principle of proportionality. Civilian casualties, injury, and destruction must not be excessive in comparison to the military advantage gained by the attack. Making such value judgements in the middle of intense battles is difficult, and the programmers who develop LAWS may struggle to comprehend the complexities of these circumstances. Even with deep learning, where machines learn by recognising patterns, there may be situations in which the system first has to gain experience, and those growing pains in technological refinement may lead to violations of the proportionality principle.

Finally, LAWS must adhere to the principle of precaution. This is the ability to recall or suspend an attack when it is not proportionate or harms civilians as opposed to military adversaries. The ability to deactivate or recall a weapon once deployed is tricky, and many therefore contend that LAWS will fail to comply with these principles and will violate the laws of armed conflict.

Conversely, others argue that autonomous characteristics alone are not enough to prove that LAWS violate IHL, and that existing principles suffice to restrict the use of LAWS to situations where IHL is not violated. Furthermore, autonomous weapons might be able to wait until they are fired upon to determine whether a person is civilian or military, as their sense of ‘self-preservation’ will not be as strong as that of human troops, thereby complying with the principle of distinction. Moreover, they might be employed in the navy or other areas not open to civilians, giving LAWS a lower threshold for compliance with IHL principles. Supporters contend that LAWS might calculate and make last-minute decisions without any subjective human emotions, allowing them to choose the best possible plan of action and thereby respect the principles of proportionality and precautions.

Martens Clause

Article 1 of AP I to the Geneva Conventions states that if certain cases are not covered under the Convention, civilians and combatants remain protected under “Customary International Law, principles of Humanity and Dictates of Public Conscience”. This is reiterated in the preamble of AP II to the Geneva Conventions. Known as the Martens Clause, it provides the basis for the ethical and moral aspects of the law of armed conflict. Since LAWS are not directly covered by the Geneva Conventions, their development and use must be guided by the Martens Clause. Therefore, LAWS may be prohibited due to non-compliance with customary international law, principles of humanity, or dictates of public conscience.

LAWS cannot be declared illegal under customary international law since there is no defined state practice, as they are still being developed. The principles of humanity require us to examine whether machines should have the ability to make life or death decisions regarding humans. Moreover, recent data suggests that the dictates of public conscience may be skewed against the use of LAWS.

It may be too early to term LAWS, which do not currently exist, legal or illegal on the basis of compliance with the Geneva Conventions. However, any discussion regarding them must keep these legal and ethical IHL-related considerations in mind.

Present Policy Situation 

The legal issues relating to LAWS are recognised by the UN Office for Disarmament Affairs. Under the Convention on Certain Conventional Weapons (CCW), a Group of Governmental Experts was asked to address the issues regarding LAWS. This group is yet to provide a single definition of the term. However, it has recommended 11 guiding principles, which were adopted by the High Contracting Parties to the CCW in 2019.

The first principle states that IHL shall apply to all autonomous weapon systems, including LAWS. The second principle addresses accountability through “human responsibility” in decision-making relating to the use of these systems. Further, any degree of human-machine interaction at any stage of development or activation must comply with IHL. Accountability for the development, deployment, and use of these weapons must be ensured as per IHL through a “chain of human command and control”. The principles also reiterate states’ obligation to conduct a legal review of any new weapons.

The guidelines also state that cyber and physical risks, and the risk of proliferation and acquisition by terrorists, must be considered while developing and acquiring such weapons. Risk assessment and mitigation must also be made a part of the design and development of such weapons. Consideration must be given to compliance with IHL and other international obligations while using LAWS. While crafting policy measures, emerging technologies in LAWS must not be “anthropomorphized”. Discussions on LAWS should not hinder peaceful civilian innovation. Finally, the principles highlight the importance of balancing military needs and human factors under the CCW framework.

The CCW also highlights the need to ensure “meaningful human control” over weapon systems but does not define the relevant criteria. Additionally, there are different stages, such as development, activation, and deployment, of autonomous weapons. Only a human can develop and activate an autonomous system; deployment, however, is determined by the autonomous weapon on its own, as per its human programming.

Therefore, the question arises: will that level of human control over the LAWS’ programming be enough to qualify as meaningful human control? If not, will a human override command, which may or may not be exercised, allow for “meaningful human control”? These questions require further deliberation on what qualifies as “meaningful human control” and whether such control will even be enough given how rapidly AI is being developed. There is also a need to ensure that no bias is programmed into these weapons.

While these guiding principles are a first step towards an international framework, there is still no universal and comprehensive legal framework to ensure accountability for LAWS.

Conclusion

The legal, ethical, and international concerns regarding LAWS must be addressed at a global level. A pre-emptive and premature ban might stifle helpful civilian innovation, and a ban will not be possible without the support of leading states like the US, Russia, and the UK. Conversely, if the development of LAWS is left unregulated, it will be easier for countries with LAWS to go to war. Moreover, the development and deployment of LAWS will create a significant imbalance between technologically advanced and technologically disadvantaged nations. Furthermore, the absence of regulation may lead to the proliferation and acquisition of LAWS by bad actors for malicious, immoral, and/or illegal purposes.

Since LAWS disarmament is not an option, control over LAWS is recommended. The issues with LAWS must be addressed at the international level by creating a binding treaty that incorporates a comprehensive definition of LAWS. The limits of autonomy must also be clearly demarcated, along with other legal and ethical considerations, and the principles of IHL, including legal reviews, must be implemented. Until then, defence research centres around the world should incorporate AI into more “defensive” and “non-lethal” military machinery. Such applications could include disarming bombs, surveillance drones, or smart borders, instead of offensive and lethal autonomous weapon systems without any overriding human control.


*Views expressed in the blog are personal and should not be attributed to the institution.

Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part II

Post authored by Prateek Sibal

The previous post on “Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part I” discussed recent advancements in AI technologies that have led to new commercial applications with potentially adverse social implications. We also considered the challenges of AI governance and discussed the role of technical benchmarks for evaluating AI systems.

In this post, we will explore the different AI risk assessment approaches that can underpin AI regulation. This post will conclude with a discussion on the next steps for national AI governance initiatives.

Artificial Intelligence Risk Assessment Frameworks

Risk assessments can help identify the AI systems that need to be regulated. Risk is determined by the severity of the impact of a problem and the probability of its occurrence. For example, the risk profile of a facial recognition system used to unlock a mobile phone differs from that of a facial recognition system used by law enforcement in public. The former may be beneficial, as it adds a privacy-protecting security feature to the mobile phone. In contrast, the latter will have a chilling effect on free expression and privacy due to its mass surveillance capability. Therefore, the risk score for facial recognition systems will depend on their use and deployment context. This section will discuss some of the approaches followed by various bodies in developing risk assessment frameworks for AI systems.
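To make the severity-times-probability framing concrete, here is a toy scoring sketch for the two facial recognition contexts described above. The 1-5 scales and the scores assigned are illustrative assumptions, not values drawn from any regulatory framework:

```python
# Toy illustration of risk as severity x probability, on invented
# 1-5 scales. The scores below are assumptions for demonstration,
# not values from any regulatory framework.

def risk_score(severity: int, probability: int) -> int:
    """Risk grows with both the impact of a failure and its likelihood."""
    return severity * probability

# Same underlying technology, two deployment contexts:
phone_unlock = risk_score(severity=2, probability=2)         # private, opt-in
public_surveillance = risk_score(severity=5, probability=4)  # mass surveillance

print(phone_unlock, public_surveillance)  # 4 vs 20: context drives risk
```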

European Commission’s approach

The European Commission’s legislative proposal on Artificial Intelligence classifies AI systems by four levels of risk and outlines risk-proportionate regulatory requirements. The categories proposed by the EU include:

  1. Unacceptable Risk: AI systems that pose a clear threat to people’s safety, livelihood, and rights fall under the category of unacceptable risk. The EU Commission has stated that applications that include social credit scoring systems and AI systems that can manipulate human behaviour will be banned.
  2. High Risk: AI systems that may harm the safety or fundamental rights of people are categorised as high-risk. There are mandatory requirements for such systems, including the “quality of data sets used; technical documentation and record-keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity”. The EU will maintain an updated list of high-risk AI systems to respond to emerging challenges. At present, high-risk AI systems include AI algorithms used in transport systems, job hiring processes, border control and management, law enforcement, education systems, and democratic processes.
  3. Limited Risk: When the risks associated with the AI systems are limited, only transparency requirements are prescribed. For example, in the case of a customer engaging with an AI-based chatbot, the customer should be informed that they are interacting with an AI system.
  4. Minimal Risk: When the risk level is identified as minimal, there are no mandatory requirements, but the developers of such AI systems may voluntarily choose to follow industry standards. Examples of such applications include AI-enabled video games or spam filters.

The EU proposal bans real-time remote biometric identification like facial recognition systems installed in public spaces due to their adverse impact on fundamental rights like freedom of expression and privacy.

German approach

In Germany, the Data Ethics Commission has proposed a five-layer criticality pyramid that ranges from no regulation at the lowest risk level to a complete ban at the highest. Figure 2 presents the criticality pyramid and risk-adapted regulation framework for AI systems. The EU approach is similar to the German one but differs in the number of levels.

Figure 2: Criticality pyramid and risk-adapted regulatory system for the use of algorithmic systems (Source: Opinion of the Data Ethics Commission)

UK approach

The AI Barometer Report of the Centre for Data Ethics and Innovation, which the UK government has tasked with facilitating multistakeholder cooperation in developing a governance regime for data-driven technologies, identifies both common and sector-specific risks associated with AI systems. The common risks include:

  1. Bias: Algorithmic bias and discrimination
  2. Explainability: Lack of explainability of AI systems
  3. Regulatory capacity: The regulatory capacity of the state, i.e. its capacity to develop and enforce regulation
  4. Data privacy: Breach in data privacy due to failure in user consent
  5. Public trust: Loss of public trust in institutions due to problematic AI and data use

The researchers identified that the severity of common risks varies across sectors like criminal justice, financial services, health and social care, digital and social media, and energy and utilities. For example, algorithmic bias leading to discrimination is considered high-risk in criminal justice, financial services, health, and social media, but medium-risk in energy and utilities. The risk assignment in this case was done through expert discussions.

Organisation for Economic Co-operation and Development (OECD) approach

The OECD’s work on AI classification presents a model for classifying an AI system that can inform risk assessment under each class. The preliminary classification of AI systems developed by the OECD Network of Experts’ working group on AI classification has four dimensions:

  1. Context: The context in which an AI system is developed and deployed. Context includes stakeholders that deploy an AI system, the stakeholders impacted by its use and the sector in which an AI system is deployed.
  2. Data: Data and inputs to an AI system play a vital role in determining the system’s outputs based on the data classifiers used, the source of the data, its structure, scale, and how it was collected.
  3. Type of algorithm: The type of algorithms used in AI systems has implications for transparency, explainability, autonomy, and privacy, among other principles. For example, an AI system can use a rules-based algorithm, which executes a series of pre-defined steps; manufacturing robots used on assembly lines are an example of such rules-based AI. In contrast, AI systems based on artificial neural networks (ANNs) are inspired by the human brain’s structure and functions. These neural networks learn to solve problems by performing many iterations until they reach the correct outcomes. In ANNs, the rules for reaching a decision are developed by the AI model itself, and the decision-making process is opaque to humans (a minimal sketch contrasting the two styles follows this list).
  4. Task: The kind of task to be performed and the type of output expected vary across AI systems. AI systems can perform various tasks from forecasting, content personalisation to detection and recognition of voice or images.
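As a rough sketch of the contrast drawn in the third dimension above, the snippet below implements both styles of ‘decision’: a transparent rules-based check and a toy neural network whose randomly initialised weights stand in for parameters a real system would learn during training. The claim-screening scenario and thresholds are invented:

```python
# Two styles of automated "decision" on the same toy input. The
# claim-screening scenario, thresholds and weights are invented.
import numpy as np

# Rules-based: every step is pre-defined and human-readable.
def rules_based_flag(claim_amount: float, form_signed: bool) -> bool:
    return claim_amount > 10_000 or not form_signed

# Neural-network style: the "rule" lives in weight matrices. Here the
# weights are random stand-ins for values a real network would learn.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))

def learned_flag(claim_amount: float, form_signed: bool) -> bool:
    x = np.array([claim_amount / 10_000, float(form_signed)])
    hidden = np.maximum(x @ W1, 0)    # ReLU layer
    return (hidden @ W2).item() > 0   # opaque decision boundary

# The first decision can be explained line by line; the second cannot.
print(rules_based_flag(12_000, True), learned_flag(12_000, True))
```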

Applying this classification framework to different cases, from facial recognition systems and medical devices to autonomous vehicles, allows us to understand the risks under each dimension and design appropriate regulation. In autonomous vehicles, the context of transportation and its significant risk of accidents increase the risk associated with AI systems. Such vehicles dynamically collect data and other inputs through sensors. They can suffer from security risks due to adversarial attacks where input data fed to the AI models can be tampered with, leading to accidents. The AI algorithms used in autonomous vehicles perform tasks like detecting road signs, deciding vehicle parameters like speed and direction, and responding to road conditions. If such decision-making happens without human control or oversight, it can pose significant risks to passengers and pedestrians’ lives. This example illustrates that autonomous vehicles can be considered a high-risk category requiring robust regulatory oversight to ensure public safety.

The four approaches to risk assessment discussed above are systematic attempts to understand AI-related risks and develop a foundation for downstream regulation that could address risks without being overly prescriptive.

Next Steps in Strengthening Risk-Adaptive Regulation for AI

This two-part blog series has framed the challenges of AI governance in terms of the Collingridge dilemma concerning the social control of technology. It then discussed the role of technical benchmarks in assessing the performance of AI systems vis-à-vis AI ethics principles. The section on AI risk assessment presented different approaches to identifying the AI applications and contexts that require regulation.

As the next step, national-level AI governance initiatives could work towards strengthening AI governance through:

  1. AI Benchmarking: Continuous development and updating of technical benchmarks for AI systems to assess their performance under different contexts with respect to AI ethics principles.
  2. Risk Assessments at the level of individual AI applications: Development of use cases and risk-assessment of different AI applications under different combinations of contexts, data and inputs, AI models and outputs.
  3. Systemic Risk Assessments: Analysis of risks at a systemic level, primarily when different AI systems interact. For example, in financial markets, AI algorithms interact with each other, and in certain situations, their interactions can cascade into a market crash.

Once AI risks are better understood, proportional regulatory approaches should be developed and subjected to Regulatory Impact Analysis (RIA). The OECD defines Regulatory Impact Analysis as a “systematic approach to critically assessing the positive and negative effects of proposed and existing regulations and non-regulatory alternatives”. RIAs can guide governments in understanding whether proposed regulations are effective and efficient in achieving the desired objective. As a complement to its legislative proposal on AI, the European Commission conducted an impact assessment of the proposed legislation and reported an aggregate compliance cost of between 100 and 500 million euros by 2025, mainly for high-risk AI applications, which account for 5-15 per cent of all AI applications. The assessment analyses other factors like the impact of the legislation on the competitiveness of Small and Medium Enterprises (SMEs), the additional budgetary responsibility on national governments, and whether the measures proposed are proportionate to the objectives of the legislation. Such impact assessments are good regulatory practice and will be important as more countries work towards national AI legislation.

Finally, given the globalised nature of different AI services and products, countries should develop national-level regulatory approaches to AI in conversation with each other. Importantly, these dialogues at the global and national levels should be multistakeholder-driven to ensure that different perspectives inform any ensuing regulation. The pooling of knowledge and coordination on governing AI risks will lead to overall benefits by ensuring AI development in a manner that is ethically aligned while providing a stable environment for innovation and interoperability due to policy coherence.

The author would like to thank Jhalak Kakkar and Nidhi Singh for their helpful feedback.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.

Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part 1

Post authored by Prateek Sibal

Between 2015 and 2020, 117 initiatives published AI ethics principles worldwide. Despite a skewed geographical scope, with 91 of these initiatives emerging in Europe and North America, the proliferation of AI ethics principles paves the way for building global consensus on AI governance. Notably, the 37 OECD Member States have adopted the OECD AI Recommendation, the G20 has endorsed these principles, and the Global Partnership on AI is operationalising them. In the UN system, the United Nations Educational, Scientific and Cultural Organization (UNESCO) is developing a Recommendation on the Ethics of AI that 193 countries may adopt in 2021.

An analysis of different principles reveals a high-level consensus around eight themes: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. At the same time, ethical principles are criticised for lacking enforcement mechanisms. Companies often commit to AI ethics principles to improve their public image, with little follow-up on implementation; a practice termed “ethics washing”. Evidence also suggests that knowledge of ethical tenets has little or no effect on whether software engineers factor ethical principles into developing products or services.

Defining principles is essential, but it is only the first step for ethical AI governance. There is a need for mid-level norms, standards and guidelines at the international level that may inform regional or national regulation to translate principles into practice. This two-part blog will discuss the need for AI governance to evolve past the ‘ethics formation stage’ into concrete and tangible steps such as developing technical benchmarks and adopting risk-based regulation for AI systems.

Part one of the blog has three sections. The first section discusses some of the technical advances in AI technologies in recent years, which have led to new commercial applications with potentially adverse social implications. Section two discusses the challenges of AI governance and presents a framework for mitigating the adverse implications of technology on society. Finally, section three discusses the role of technical benchmarks in evaluating AI systems. Part two of the blog will discuss risk assessment approaches that help identify the AI applications and contexts that need to be regulated. It will also discuss the next steps for national AI governance initiatives.

The blog follows the definition of an AI system proposed by the OECD’s AI Experts Group. They describe an AI system as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. It uses machine or human-based inputs to perceive real or virtual environments, abstract such perceptions into models (in an automated manner, e.g. with ML or manually), and use model inference to formulate options for information or action. AI systems are designed to operate with varying levels of autonomy.”

Recent Advances in AI Technologies

Artificial Intelligence is developing rapidly, so it is important to lay out a broad overview of AI developments that may have profound and potentially adverse impacts on individuals and society. The 2021 AI Index report notes four crucial technical advances that have hastened the commercialisation of AI technologies:

  • AI-Generated Content: AI systems can generate high-quality text, audio and visual content to a level that it is difficult for humans to distinguish between synthetic and non-synthetic content.
  • Image Processing: Computer vision, a branch of computer science that “works on enabling computers to see, identify and process images in the same way that human vision does, and then provide appropriate output”, has seen immense progress in the past decade and is fast industrialising in applications that include autonomous vehicles.
  • Language Processing: Natural Language Processing (NLP) is a branch of computer science “concerned with giving computers the ability to understand the text and spoken words in much the same way human beings can”. NLP has advanced such that AI systems with language capabilities now have meaningful economic impact through live translations, captioning, and virtual voice assistants.
  • Healthcare and biology: DeepMind’s AlphaFold solved the decades-old protein folding problem using machine learning techniques. This breakthrough will allow the study of protein structure and will contribute to drug discovery.

These technological advances have social implications. For instance, the technology for generating synthetic faces has improved rapidly. As shown in Figure 1, in 2014 AI systems produced grainy faces, but by 2017 they were generating realistic synthetic faces. Such AI systems have led to the proliferation of ‘deepfake’ pornography, which overwhelmingly targets women and has the potential to erode people’s trust in the information and videos they encounter online. Some actors misuse deepfake technology to spread online disinformation, with adverse implications for democracy and political stability. Such developments have made AI governance a pressing matter.


Figure 1: Improvement in AI-generated images. Source: https://arxiv.org/pdf/1802.07228.pdf

Challenges of AI Governance

In this blog, AI governance is understood as the development and application by governments, the private sector, and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape AI’s evolution and use. As highlighted in the previous section, the rapid advancements in the field of AI technologies have brought the need for better AI governance to the forefront.

In thinking about AI governance, a conundrum that preoccupies many governments worldwide concerns enacting regulation that does not stifle innovation while still providing adequate safeguards to protect human rights and fundamental freedoms.

Technology regulation is complicated because until a technology has been extensively developed and widely used, its impact on society is difficult to predict. However, once it is deeply entrenched and its effect on society is understood better, it becomes more challenging to regulate the technology. This tension between free and unimpeded technology development and regulating adverse implications is termed the Collingridge dilemma.

David Collingridge, the author of The Social Control of Technology, noted that when regulatory decisions have to be made in ignorance of a technology’s social impact, continuous monitoring of that impact can help correct unexpected consequences early. Collingridge’s guidelines for decision-making under ignorance can inform AI governance as well. These include choosing technology options with:

  • Low failure costs: Selecting options with low error costs, i.e. if a policy or regulation fails to achieve its intended objective, the costs associated with failure are limited.
  • Quicker to correct: Selecting technologies with low response time for correction after the discovery of unanticipated problems.
  • Low cost of applying remedy: Preferring options with a low fixed cost and a higher variable cost over those with a higher fixed cost, so that applying a remedy remains affordable; and
  • Continuous monitoring: Cost-effective and efficient monitoring can ensure the discovery of unpredicted consequences quickly.

For instance, requirements around transparency in AI systems provide information for monitoring their impact on society. Similarly, risk assessments of AI systems offer a pre-emptive form of oversight over technology development and use, which can help minimise potential social harms.

Technical benchmarks for evaluating AI systems

To address ethical problems related to bias, discrimination, and the lack of transparency and accountability in algorithmic decision-making, quantitative benchmarks are needed to assess AI systems’ performance against these ethical principles.

The Institute of Electrical and Electronics Engineers (IEEE), through its Global Initiative on Ethics of Autonomous and Intelligent Systems, is developing technical standards, including on bias in AI systems. They describe “specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms”. Similarly, in the United States, the National Institute of Standards and Technology (NIST) is developing standards for explainable AI based on principles that call for AI systems to provide reasons for their outputs in a manner that is understandable to individual users, explain the process used for generating the output, and deliver their decision only when the AI system is fully confident.

For example, there has been significant progress in introducing benchmarks for the regulation of facial recognition technology. Facial recognition systems have a large commercial market and are used for various tasks, including law enforcement and border control. These tasks involve verifying visa photos, matching photos against criminal databases, and detecting child abuse images. Such facial recognition systems have been the cause of significant concern due to high error rates in detecting faces and their impingement on human rights. Biases in such systems have adverse consequences for individuals who are denied entry at borders or wrongfully incarcerated. In the United States, the National Institute of Standards and Technology’s Face Recognition Vendor Test provides a benchmark for comparing the performance of different commercially available facial recognition systems by operating their algorithms on different image datasets.
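As an illustration of the kind of comparison such a vendor test enables, the sketch below runs each candidate matcher over the same labelled image pairs and computes its false match and false non-match rates. The matcher interface, file names, and toy matching rule are invented placeholders, not NIST’s actual methodology; a real benchmark would also break these rates down by demographic group:

```python
# Comparing matchers on the same labelled image pairs. The matcher
# interface, file names and toy matching rule are invented; a real
# benchmark would also break error rates down by demographic group.
from typing import Callable, List, Tuple

Pair = Tuple[str, str, bool]  # (image_a, image_b, same_person?)

def error_rates(match: Callable[[str, str], bool], pairs: List[Pair]):
    """Return (false match rate, false non-match rate) for one matcher."""
    false_match = sum(1 for a, b, same in pairs if not same and match(a, b))
    false_non_match = sum(1 for a, b, same in pairs if same and not match(a, b))
    impostor = max(1, sum(1 for *_, same in pairs if not same))
    genuine = max(1, sum(1 for *_, same in pairs if same))
    return false_match / impostor, false_non_match / genuine

# Toy stand-in for a vendor's algorithm: declares a match whenever the
# two file names share a person prefix.
def vendor_a(img1: str, img2: str) -> bool:
    return img1.split("_")[0] == img2.split("_")[0]

pairs = [("p1_a.jpg", "p1_b.jpg", True), ("p1_a.jpg", "p2_a.jpg", False)]
print(error_rates(vendor_a, pairs))  # (0.0, 0.0) on this tiny set
```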

The progress in defining benchmarks for ethical principles needs to be complemented by risk assessments of AI systems to pre-empt potentially adverse social impact in line with the Collingridge Dilemma discussed in the previous section. Risk assessments allow the categorisation of AI applications by their risk ratings. They can help develop risk-proportionate regulation for AI systems instead of blanket rules that may place an unnecessary compliance burden on technology development. The next blog in this two-part series will engage with potential risk-based approaches to AI regulation.

The author would like to thank Jhalak Kakkar and Nidhi Singh for their helpful feedback.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.

Building an AI Governance Framework for India, Part III

Embedding Principles of Privacy, Transparency and Accountability

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Aayog Working Document’ or ‘Working Document’). The Working Document was initially prepared for an expert consultation held on 21 July 2020 and was later released for stakeholder comments on the development of a ‘Responsible AI’ policy in India. CCG’s comments and analysis on the Working Document can be accessed here.

In our first post in the series, ‘Building an AI governance framework for India’, we discussed the legal and regulatory implications of the Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its constitutional framework, and (2) based on clearly articulated overarching ‘Principles for Responsible AI’. Part II of the series discussed specific Principles for Responsible AI – Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. We explored the constituent elements of these principles and the avenues for incorporating them into the Indian regulatory framework. 

In this final post of the series, we will discuss the remaining principles of Privacy, Transparency and Accountability. 

Principle of Privacy 

Given the diversity of AI systems, the privacy risks they pose to individuals and to society as a whole are also varied. These may be broadly related to:

(i) Data protection and privacy: This relates to the privacy implications of the use of data by AI systems and the data protection considerations that arise from this use. There are two broad aspects to consider. Firstly, AI systems must be tailored to the legal frameworks for data protection. Secondly, given that AI systems can be used to re-identify anonymised data, the mere anonymisation of data for the training of AI systems may not adequately protect an individual’s privacy.

a) Data protection legal frameworks: Machine learning and AI technologies have existed for decades; however, it is the explosion in the availability of data that accounts for the advancement of AI technologies in recent years. Machine learning and AI systems depend on data for their training. Generally, the more data a system is given, the more it learns and, ultimately, the more accurate it becomes. The application of existing data protection frameworks to the use of data by AI systems may raise challenges.

In the Indian context, the Personal Data Protection Bill, 2019 (PDP Bill), currently being considered by Parliament, contains some provisions that may apply to some aspects of the use of data by AI systems. One such provision is Clause 22 of the PDP Bill, which requires data fiduciaries to incorporate the seven ‘privacy by design’ principles and embed privacy and security into the design and operation of their product and/or network. However, given that AI systems rely significantly on anonymised personal data, their use of data may not fall squarely within the regulatory domain of the PDP Bill. The PDP Bill does not apply to the regulation of anonymised data at large but the Data Protection Authority has the power to specify a code of practice for methods of de-identification and anonymisation, which will necessarily impact AI technologies’ use of data.

b) Use of AI to re-identify anonymised data: AI applications can be used to re-identify anonymised personal data. To safeguard the privacy of individuals, datasets composed of personal data are often anonymised through a de-identification and sampling process before being shared for the purposes of training AI systems. However, current technology makes it possible for AI systems to reverse this process of anonymisation and re-identify people, with significant privacy implications for an individual’s personal data.
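A toy example of how such re-identification can work is a linkage attack, in which an ‘anonymised’ dataset is joined with an auxiliary public dataset on shared quasi-identifiers like postcode and birth year. All names and values below are invented:

```python
# Toy linkage attack: an "anonymised" dataset is re-identified by
# joining it with an auxiliary public dataset on shared
# quasi-identifiers. All names and values are invented.
import pandas as pd

anonymised = pd.DataFrame({
    "postcode": ["110001", "110001", "560034"],
    "birth_year": [1985, 1990, 1985],
    "diagnosis": ["diabetes", "asthma", "cardiac"],
})
public_register = pd.DataFrame({
    "name": ["A. Sharma", "B. Rao", "C. Iyer"],
    "postcode": ["110001", "110001", "560034"],
    "birth_year": [1985, 1990, 1985],
})

# Joining on the quasi-identifiers restores the link between names
# and the supposedly anonymous medical records.
reidentified = anonymised.merge(public_register, on=["postcode", "birth_year"])
print(reidentified[["name", "diagnosis"]])
```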

(ii) Impact on society: The impact of the use of AI systems on society relates to broader privacy considerations that arise at a societal level due to the deployment and use of AI, including mass surveillance, psychological profiling, and the use of data to manipulate public opinion. The use of AI in facial recognition surveillance technology is one application with significant privacy implications for society as a whole: it enables individuals to be easily tracked and identified, and has the potential to significantly transform expectations of privacy and anonymity in public spaces.

Due to the varying nature of the privacy risks and implications caused by AI systems, various regulatory mechanisms will have to be designed to address these concerns. It is important to put in place a reporting and investigation mechanism that collects and analyses information on the privacy impacts caused by the deployment of AI systems, and on privacy incidents that occur in different contexts. The collection of this data would allow actors across the globe to identify common threads of failure and mitigate potential privacy failures arising from the deployment of AI systems.

To this end, we can draw on a mechanism currently in place for reporting and investigating aircraft incidents, as detailed under Annex 13 of the Convention on International Civil Aviation (Chicago Convention). It lays down the procedure for investigating aviation incidents and a reporting mechanism for sharing information between countries. The aim of such an accident investigation is not to apportion blame or liability, but to study the cause of the accident extensively and prevent future incidents.

A similar incident investigation mechanism may be employed for AI incidents involving privacy breaches. With many countries now widely developing and deploying AI systems, such a model of incident investigation would ensure that countries can learn from each other’s experiences and deploy more privacy-secure AI systems.

Principle of Transparency

The concept of transparency is a recognised prerequisite for the realisation of ‘trustworthy AI’. The goal of transparency in ethical AI is to ensure that the functioning of the AI system and its resultant outcomes are non-discriminatory, fair, and bias-mitigating, and that the AI system inspires public confidence in the delivery of safe and reliable AI innovation and development. Transparency is also important for better adoption of AI technology: the more users feel they understand the overall AI system, the more inclined and better equipped they are to use it.

The level of transparency must be tailored to its intended audience, and information about the working of an AI system should be contextualised to the various stakeholder groups interacting with and using it. The Institute of Electrical and Electronics Engineers (IEEE), a global professional organisation of electronic and electrical engineers, has suggested that different stakeholder groups may require varying levels of transparency. This means that groups such as users, incident investigators, and the general public would require different standards of transparency depending on the nature of the information relevant to their use of the AI system.

Presently, many AI algorithms are black boxes: automated decisions are taken based on machine learning over training datasets, and the decision-making process is not explainable. When such AI systems produce a decision, human end users do not know how they arrived at their conclusions. This raises two major transparency problems: the public’s perception and understanding of how AI works, and how much developers themselves actually understand about their own AI system’s decision-making process. In many cases, developers may not know, or be able to explain, how an AI system reaches its conclusions or how it has arrived at certain solutions.

This results in a lack of transparency. Some organisations have suggested opening up AI algorithms for scrutiny and ending reliance on opaque algorithms. On the other hand, the NITI Working Document is of the view that disclosing the algorithm is not the solution and instead, the focus should be on explaining how the decisions are taken by AI systems. Given the challenges around explainability discussed above, it will be important for NITI Aayog to discuss how such an approach will be operationalised in practice.

While many countries and organisations are researching different techniques for increasing the transparency of AI systems, one common suggestion that has gained traction in the last few years is the introduction of labelling mechanisms in AI systems. An example of this is Google’s proposal to use ‘Model Cards’, which are intended to clarify the scope of an AI system’s deployment and minimise its usage in contexts for which it may not be well suited.

Model cards are short documents that accompany a trained machine learning model. They enumerate the benchmarked evaluation of the working of an AI system in a variety of conditions, across the different cultural, demographic, and intersectional groups that may be relevant to the system’s intended application. They also contain clear information on an AI system’s capabilities, including the intended purpose for which it is being deployed, the conditions under which it has been designed to function, and its expected accuracy and limitations. Adopting model cards and similar labelling requirements in the Indian context may be a useful step towards introducing transparency into AI systems.
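As an illustration, the information a model card records might look something like the sketch below, expressed as a plain Python dictionary. The field names and values are assumptions based on the description above, not Google’s exact schema:

```python
# Illustrative sketch of the information a model card records. Field
# names and values are assumptions based on the description above,
# not Google's exact schema.
model_card = {
    "model": "claim-risk-classifier-v2",
    "intended_use": "Prioritising benefit claims for manual review",
    "out_of_scope_uses": ["Automated denial of claims without human review"],
    "training_data": "Claims filed 2015-2019, anonymised",
    "evaluation": {
        # benchmarked performance across groups relevant to deployment
        "overall_accuracy": 0.91,
        "accuracy_by_group": {"citizens": 0.92, "dual_nationals": 0.84},
    },
    "limitations": [
        "Accuracy drops for groups under-represented in training data",
        "Not validated for use outside the original jurisdiction",
    ],
}
```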

Principle of Accountability

The Principle of Accountability aims to recognise the responsibility of the different organisations and individuals that develop, deploy, and use AI systems. Accountability is about responsibility, answerability, and trust. There is no one standard form of accountability; rather, it depends on the context of the AI system and the circumstances of its deployment.

Holding individuals and entities accountable for harm caused by AI systems poses significant challenges, as AI systems generally involve multiple parties at various stages of the development process. The regulation of the adverse impacts caused by AI systems often goes beyond the existing regimes of tort law, privacy law, or consumer protection law. Some degree of accountability can be achieved by enabling greater human oversight. To foster trust in AI and appropriately determine the party who is accountable, it is necessary to build a set of shared principles that clarify the responsibilities of each stakeholder involved in the research, development, and implementation of an AI system, from developers to service providers and end users.

Accountability has to be ensured at the following stages of an AI system: 

(i) Pre-deployment: It would be useful to implement an audit process before the AI system is deployed. A potential mechanism for implementing this could be a multi-stage audit process undertaken post-design but before the deployment of the AI system by the developer. This would involve scoping, mapping, and testing a potential AI system before it is released to the public. This can include ensuring risk mitigation strategies for changing development environments and ensuring documentation of the policies, processes, and technologies used in the AI system.

Depending on the nature of the AI system and the potential for risk, regulatory guidelines can be developed prescribing the involvement of various categories of auditors, such as internal auditors, expert third-party auditors and auditors from the relevant regulatory agency, at various stages of the audit. Such pre-deployment audits are aimed at closing the accountability gap which currently exists. A simple illustration of how such a staged audit could gate deployment is sketched below.
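As a purely illustrative sketch, such a multi-stage audit can be pictured as a gate that blocks deployment until every stage has been signed off. The stage names mirror the scoping, mapping and testing steps described above; the sign-off structure itself is a hypothetical simplification, not a prescribed process.

```python
# Illustrative sketch of a multi-stage pre-deployment audit gate.
# Stage names and checks are hypothetical placeholders, not a prescribed process.

AUDIT_STAGES = ("scoping", "mapping", "testing", "documentation")

def cleared_for_deployment(system_name: str, signoffs: dict[str, bool]) -> bool:
    """Return True only if every audit stage has been completed and signed off."""
    for stage in AUDIT_STAGES:
        if not signoffs.get(stage, False):
            print(f"{system_name}: deployment blocked at the '{stage}' stage")
            return False
    print(f"{system_name}: all audit stages signed off")
    return True

cleared_for_deployment(
    "welfare-screener",  # hypothetical AI system
    {"scoping": True, "mapping": True, "testing": False},
)
```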

(ii) During deployment: Once the AI system has been deployed, it is important to keep auditing it to track the changes and evolution it undergoes in the course of its deployment. AI systems constantly learn from data and evolve to become better and more accurate. It is important that the development team continuously monitors the system to capture any errors that may arise, including inconsistencies arising from input data or design features, and addresses them promptly. One simple form of such monitoring is sketched below.
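One simple, widely used form of such monitoring is to check whether the live input data still resembles the data the model was trained on. The sketch below (synthetic data, and a conventional rule-of-thumb alert threshold) computes the Population Stability Index, a common drift metric, over a single hypothetical input feature:

```python
# Minimal sketch of post-deployment data-drift monitoring using the
# Population Stability Index (PSI). Data and threshold are illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between the training-time distribution and the live distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
training_incomes = rng.normal(50_000, 10_000, 5_000)  # distribution at training time
live_incomes = rng.normal(58_000, 12_000, 5_000)      # live inputs have shifted

score = psi(training_incomes, live_incomes)
status = "investigate drift" if score > 0.2 else "stable"   # 0.2 is a rule of thumb
print(f"PSI = {score:.3f} -> {status}")
```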

(iii) Post-deployment: Ensuring accountability post-deployment in an AI system can be challenging. The NITI Working Document also recognised that assigning accountability for specific decisions becomes difficult in a scenario with multiple players in the development and deployment of an AI system. In the absence of any consequences for decisions harming others, no one party would feel obligated to take responsibility or act to mitigate the adverse effects of the AI system. Additionally, the lack of accountability also makes it difficult to establish grievance redressal mechanisms that can be used to address scenarios where harm has arisen from the use of AI systems. 

The Council of Europe, in its guidelines on the human rights impacts of algorithmic systems, highlighted the need for effective remedies to ensure responsibility and accountability for the protection of human rights in the context of the deployment of AI systems. A potential model for grievance redressal is the mechanism suggested in the AI4People’s Ethical Framework for a Good Society report by the Atomium – European Institute for Science, Media and Democracy. The report suggests that any grievance redressal mechanism for AI systems would have to be widely accessible and include redress for harms inflicted, costs incurred, and other grievances caused by the AI system. It must demarcate a clear system of accountability for both organisations and individuals. Of the various redressal mechanisms suggested in the report, two significant ones are: 

(a) AI ombudsperson: This would ensure the auditing of allegedly unfair or inequitable uses of AI reported by users or the public at large through an accessible judicial process. 

(b) Guided process for registering a complaint: This envisions laying down a simple process, similar to filing a Right to Information request, which can be used to bring discrepancies or faults in an AI system to the notice of the authorities.

Such mechanisms can be evolved to address the human rights concerns and harms arising from the use of AI systems in India. 

Conclusion

In early October, the Government of India hosted the Responsible AI for Social Empowerment (RAISE) Summit, which involved discussions around India’s vision and a roadmap for social transformation, inclusion and empowerment through Responsible AI. At the RAISE Summit, speakers underlined the need for adopting AI ethics and a human-centred approach to the deployment of AI systems. However, this conversation is still at a nascent stage, and several rounds of consultations may be required to build these principles into an Indian AI governance and regulatory framework. 

As India enters into the next stage of developing and deploying AI systems, it is important to have multi-stakeholder consultations to discuss mechanisms for the adoption of principles for Responsible AI. This will enable the framing of an effective governance framework for AI in India that is firmly grounded in India’s constitutional framework. While the NITI Aayog Working Document has introduced the concept of ‘Responsible AI’ and the ethics around which AI systems may be designed, it lacks substantive discussion on these principles. Hence, in our analysis, we have explored global views and practices around these principles and suggested mechanisms appropriate for adoption in India’s governance framework for AI. Our detailed analysis of these principles can be accessed in our comments to the NITI Aayog’s Working Document Towards Responsible AI for All.

Building an AI Governance Framework for India, Part II

Embedding Principles of Safety, Equality and Non-Discrimination

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

In our previous post on building an AI governance framework for India, we discussed the legal and regulatory implications of the proposed Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its Constitutional framework and (2) based on clearly articulated overarching principles. While the NITI Working Document introduces certain principles, it does not go into any substantive details on what the adoption of these principles into India’s regulatory framework would entail.

We will now examine these ‘Principles for Responsible AI’, their constituent elements and avenues for incorporating them into the Indian regulatory framework. The NITI Working Document proposed the following seven ‘Principles for Responsible AI’ to guide India’s regulatory framework for AI systems: 

  1. Safety and reliability
  2. Equality
  3. Inclusivity and Non-Discrimination
  4. Privacy and Security 
  5. Transparency
  6. Accountability
  7. Protection and Reinforcement of Positive Human Values. 

This post explores the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. A subsequent post will discuss the principles of Privacy and Security, Transparency, Accountability and the Protection and Reinforcement of Positive Human Values.

Principle of Safety and Reliability

The Principle of Safety and Reliability aims to ensure that AI systems operate reliably and in accordance with their intended purpose throughout their lifecycle, and that the security, safety and robustness of an AI system are maintained. It requires that AI systems should not pose unreasonable safety risks, should adopt safety measures which are proportionate to the potential risks, should be continuously monitored and tested to ensure compliance with their intended purpose, and should have a continuous risk management system to address any identified problems. 

Here, it is important to note the distinction between safety and reliability. The reliability of a system relates to the ability of an AI system to behave exactly as its designers have intended and anticipated. A reliable system would adhere to the specifications it was programmed to carry out. Reliability is, therefore, a measure of consistency, and it establishes confidence in the safety of a system. Safety, on the other hand, refers to an AI system’s ability to do what it is supposed to do without harming users (human physical integrity), resources or the environment.

Human oversight: An important aspect of ensuring the safety and reliability of AI systems is the presence of human oversight over the system. Any regulatory framework that is developed in India to govern AI systems must incorporate norms that specify the circumstances and degree to which human oversight is required over various AI systems. 

The level of human oversight required would depend upon the sensitivity of the function and the potential the AI system has for significant impact on an individual’s life. For example, AI systems deployed in the context of the provision of government benefits should have a high level of human oversight, and decisions made by the AI system in this context should be reviewed by a human before being implemented. Other AI systems may be deployed in contexts that do not need constant human involvement, such as vending machines running simple algorithms; these systems should nonetheless have a mechanism in place for human review if a question is subsequently raised, say by a user. Hence, the purpose for which the system is deployed and the impact it could have on individuals would be relevant factors in determining whether a ‘human in the loop’, ‘human on the loop’, or other oversight mechanism is appropriate. The distinction is sketched in code below. 
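The distinction between these oversight modes can be illustrated in code. In the hypothetical routine below, the ‘human in the loop’ path blocks a high-impact decision until an official signs off, while the ‘human on the loop’ path lets the system act autonomously but records every decision for later human review; the impact classification and score threshold are illustrative assumptions.

```python
# Illustrative sketch of 'human in the loop' vs 'human on the loop' oversight.
# The scoring, threshold and review queue are hypothetical placeholders.

review_queue: list[dict] = []   # decisions awaiting after-the-fact human review

def ask_human(application: dict, model_score: float) -> str:
    print(f"Routing {application['id']} (score={model_score:.2f}) to an official")
    return "pending-human-review"

def decide(application: dict, model_score: float, high_impact: bool) -> str:
    if high_impact:
        # Human in the loop: no decision takes effect without human sign-off.
        return ask_human(application, model_score)
    # Human on the loop: act autonomously, but log the decision for review.
    decision = "flag" if model_score > 0.8 else "approve"
    review_queue.append({"application": application, "decision": decision})
    return decision

print(decide({"id": "A-17"}, 0.93, high_impact=True))    # e.g. a benefits decision
print(decide({"id": "A-18"}, 0.41, high_impact=False))   # e.g. a low-stakes system
```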

Principle of Equality

The principle of equality holds that everyone, irrespective of their status in society, should get the same opportunities and protections with the development of AI systems. 

Implementing equality in the context of AI systems essentially requires three components: 

(i) Protection of human rights: AI instruments developed across the globe have highlighted that the implementation of AI would pose risks to the right to equality, and countries would have to take steps to mitigate such risks proactively. 

(ii) Access to technology: AI systems should be designed to ensure widespread access to technology, so that people may derive benefits from AI technology.

(iii) Guarantees of equal opportunities through technology: The guarantee of equal opportunity relies upon the transformative power of AI systems to “help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge” and “produce social and economic benefits for all by reducing social inequalities and vulnerabilities.” AI systems will have to be designed and deployed such that they further the guarantees of equal opportunity and do not exacerbate and further entrench existing inequality.

The development, use and deployment of AI systems in society would pose the above-mentioned risks to the right to equality, and India’s regulatory framework for AI must take steps to mitigate such risks proactively.

Principle of Inclusivity and Non-Discrimination

The idea of non-discrimination mostly arises out of technical considerations in the context of AI. It holds that bias and discrimination should be mitigated in the training data, the technical design choices, and the technology’s deployment, so as to prevent discriminatory impacts. 

Examples of this can be seen in data collection in policing, where disproportionate attention paid to neighbourhoods with minorities would show higher incidences of crime in those neighbourhoods, thereby skewing AI results. The use of AI systems becomes safer when they are trained on datasets that are sufficiently broad and that encompass the various scenarios in which the system is envisaged to be deployed. Additionally, datasets should be developed to be representative, so as to avoid discriminatory outcomes from the use of the AI system. 

Another example is semi-autonomous vehicles, which experience higher accident rates among dark-skinned pedestrians due to the software’s poorer performance in recognising darker-skinned individuals. This can be traced back to training datasets which contained mostly light-skinned people. A lack of diversity in the dataset can lead to discrimination against specific groups in society. To ensure effective non-discrimination, training data must be truly representative of society, with no section of the populace either over-represented or under-represented in a way that may skew the datasets. While designing AI systems for deployment in India, the constitutional rights of individuals should be used as the central values around which the systems are designed. 

In order to implement inclusivity in AI, the diversity of the team involved in design as well as the diversity of the training data set would have to be assessed. This would involve the creation of guidelines under India’s regulatory framework for AI to help researchers and programmers in designing inclusive data sets, measuring product performance on the parameter of inclusivity, selecting features to avoid exclusion and testing new systems through the lens of inclusivity. A simple starting point for such an assessment is sketched below.
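A simple starting point (a sketch only, with synthetic data and hypothetical group labels) is to measure each group’s share of the dataset alongside the model’s accuracy for that group, the kind of disaggregated evaluation that can surface under-representation before deployment:

```python
# Minimal sketch of an inclusivity check: group representation in the data
# and per-group accuracy of a model. Groups and predictions are synthetic.
from collections import Counter
import numpy as np

rng = np.random.default_rng(2)
groups = rng.choice(["group_a", "group_b", "group_c"], size=1000, p=[0.7, 0.25, 0.05])
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(groups == "group_c",              # hypothetical model that guesses
                  rng.integers(0, 2, size=1000),    # on the under-represented group
                  y_true)                           # and is perfect elsewhere

print("Representation:", dict(Counter(groups)))
for g in ("group_a", "group_b", "group_c"):
    mask = groups == g
    print(f"{g}: accuracy = {(y_true[mask] == y_pred[mask]).mean():.2f}")
```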

Checklist Model: To address the challenges of non-discrimination and inclusivity, a potential model which can be adopted in India’s regulatory framework for AI is the ‘Checklist’. The European Network of Equality Bodies (EQUINET), in its recent report on ‘Meeting the new challenges to equality and non-discrimination from increased digitisation and the use of Artificial Intelligence’, provides a checklist to assess whether an AI system complies with the principles of equality and non-discrimination. The checklist consists of several broad categories, with a focus on the deployment of AI technology in Europe. This includes heads such as direct discrimination, indirect discrimination, transparency, other types of equity claims, data protection, liability issues, and identification of the liable party. 

The list contains a series of questions which judge whether an AI system meets standards of equality and identify any potential biases it may have. For example, the question “Does the artificial intelligence system treat people differently because of a protected characteristic?” includes the parameters of both direct data and proxies. If the answer to the question is yes, the system would be identified as indulging in indirect bias. A similar checklist system, contextualised for India, can be developed and employed in India’s regulatory framework for AI; a simple structured form of such a checklist is sketched below. 
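A checklist of this kind also lends itself to a structured, machine-readable form. The sketch below paraphrases a few EQUINET-style categories; the questions, answers and pass/fail logic are hypothetical simplifications of the report’s checklist, intended only to show how such an assessment could be recorded consistently:

```python
# Illustrative sketch of a non-discrimination checklist, loosely inspired by
# the EQUINET report's categories. Questions and answers are hypothetical.

CHECKLIST = {
    "direct_discrimination": "Does the system treat people differently because of "
                             "a protected characteristic, directly or via proxies?",
    "indirect_discrimination": "Does a neutral-seeming rule disadvantage a "
                               "protected group in practice?",
    "transparency": "Can the system's decisions be explained to affected persons?",
    "liability": "Is the party liable for harm clearly identified?",
}

def failing_categories(answers: dict[str, bool]) -> list[str]:
    """Return checklist categories where a problem was found (answer is True)."""
    return [category for category, problem in answers.items() if problem]

failures = failing_categories({
    "direct_discrimination": True,   # e.g. dual citizenship used as a risk proxy
    "indirect_discrimination": False,
    "transparency": True,
    "liability": False,
})
print("Failing categories:", failures)
```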

Way forward

This post highlights some of the key aspects of the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. Integrating the principles identified in the NITI Working Document into India’s regulatory framework requires that we first clearly define their content, scope and ambit, so as to identify the right mechanisms to operationalise them. Given the absence in the NITI Working Document of any exploration of the content of these AI principles or the mechanisms for their implementation in India, we have examined the relevant international literature surrounding the adoption of AI ethics and suggested mechanisms for their adoption. The NITI Working Document has spurred discussion around designing an effective regulatory framework for AI. However, these discussions are at a preliminary stage, and there is a need to develop a far more nuanced proposal for a regulatory framework for AI.

Over the last week, India has hosted the Responsible AI for Social Empowerment (RAISE) Summit, which has involved discussions around India’s vision and roadmap for social transformation, inclusion and empowerment through Responsible AI. As we discuss mechanisms for India to effectively harness the economic potential of AI, we also need to design an effective framework to address the massive regulatory challenges emerging from the deployment of AI, and to do so simultaneously rather than as an afterthought post-deployment. While a few of the RAISE sessions engaged with certain aspects of regulating AI, there remains a need for extensive, continued public consultations with a cross-section of stakeholders to embed principles for Responsible AI in the design of an effective AI regulatory framework for India. 

For a more detailed discussion on these principles and their integration into the Indian context, refer to our comments to the NITI Aayog here. 

Building an AI governance framework for India

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a “Working Document: Towards Responsible AI for All” (“NITI Working Document/Working Document”). The Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

The Working Document highlights the potential of Artificial Intelligence (“AI”) in the Indian context. It attempts to identify the challenges that will be faced in the adoption of AI and makes some recommendations on how to address these challenges. The Working Document emphasises the economic potential of the adoption of AI in boosting India’s annual growth rate, its potential for use in the social sector (‘AI for All’) and the potential for India to export relevant social sector products to other emerging economies (‘AI Garage’). 

However, this is not the first time that the NITI Aayog has discussed the large-scale adoption of AI in India. In 2018, the NITI Aayog released a discussion paper on the “National Strategy for Artificial Intelligence” (“National Strategy”). Building upon the National Strategy, the Working Document attempts to delineate ‘Principles for Responsible AI’ and identify relevant policy and governance recommendations. 

Any framework for the regulation of AI systems needs to be based on clear principles. The ‘Principles for Responsible AI’ identified by the Working Document include the principles of safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values. While the NITI Working Document introduces these principles, it does not go into any substantive details on the regulatory approach that India should adopt and what the adoption of these principles into India’s regulatory framework would entail. 

In a series of posts, we will discuss the legal and regulatory implications of the proposed Working Document and more broadly discuss the regulatory approach India should adopt to AI and the principles India should embed in it. In this first post, we map out key considerations that should be kept in mind in order to develop a comprehensive regulatory regime to govern the adoption and deployment of AI systems in India. Subsequent posts will discuss the various ‘Principles for Responsible AI’, their constituent elements and how we should think of incorporating them into the Indian regulatory framework.

Approach to building an AI regulatory framework 

While the adoption of AI has several benefits, it also carries potential harms and unintended risks if the technology is not adequately assessed for its alignment with India’s constitutional principles and its impact on the safety of individuals. Depending upon the nature and scope of the deployment of an AI system, its potential risks can include a discriminatory impact on vulnerable and marginalised communities, and material harms such as a negative impact on the health and safety of individuals. In the case of deployments by the State, risks include the violation of the fundamental rights to equality, privacy, freedom of assembly and association, and freedom of speech and expression. 

We highlight below some of the regulatory considerations that should be kept in mind:

Anchoring AI regulatory principles within the constitutional framework of India

The use of AI systems has raised concerns about their potential to violate multiple rights protected under the Indian Constitution such as the right against discrimination, the right to privacy, the right to freedom of speech and expression, the right to assemble peaceably and the right to freedom of association. Any regulatory framework put in place to govern the adoption and deployment of AI technology in India will have to be in consonance with its constitutional framework. While the NITI Working Document does refer to the idea of the prevailing morality of India and its relation to constitutional morality, it does not comprehensively address the idea of framing AI principles in compliance with India’s constitutional principles.

For instance, the government is seeking to acquire facial surveillance technology, and the National Strategy discusses the use of AI-powered surveillance applications by the government to predict crowd behaviour and for crowd management. The use of AI-powered surveillance systems such as these needs to be balanced against their impact on an individual’s right to freedom of speech and expression, privacy and equality. Operational challenges surrounding accuracy and fairness in these systems raise further concerns. Considering the risks posed to the privacy of individuals, the deployment of these systems by the government, if at all, should only be done in specific contexts for a particular purpose and in compliance with the principles laid down by the Supreme Court in the Puttaswamy case.

In the context of AI’s potential to exacerbate discrimination, it is relevant to discuss the State’s use of AI systems for the sentencing of criminals and assessing recidivism. AI systems are trained on existing datasets, and these datasets tend to contain historically biased, unequal and discriminatory data. We have to be cognizant of the propensity for historical bias and discrimination to be imported into AI systems and their decision making. This could further reinforce and exacerbate the existing discrimination in the criminal justice system towards marginalised and vulnerable communities, and result in a potential violation of their fundamental rights.

The National Strategy acknowledges the presence of such biases and proposes a technical approach to reduce bias. While such attempts to rectify the situation and yield fairer outcomes are welcome, a purely technical approach disregards the fact that these datasets are biased because they arise from a biased, unequal and discriminatory world. As we seek to build effective regulation to govern the use and deployment of AI systems, we have to remember that these are socio-technical systems that reflect the world around us and embed the biases, inequality and discrimination inherent in Indian society. We have to keep this broader Indian social context in mind as we design AI systems and create regulatory frameworks to govern their deployment. 

While the Working Document introduces principles for responsible AI such as equality, inclusivity and non-discrimination, and privacy and security, there needs to be substantive discussion around incorporating these principles into India’s regulatory framework in consonance with constitutionally guaranteed rights.

Regulatory Challenges in the adoption of AI in India

As India designs a regulatory framework to govern the adoption and deployment of AI systems, it is important that we keep the following in focus: 

  • Heightened threshold of responsibility for government or public sector deployment of AI systems

The EU is considering adopting a risk-based approach to the regulation of AI, with heavier regulation for high-risk AI systems. The extent of the risk to factors such as safety, consumer rights and fundamental rights is assessed by looking at the sector of deployment and the intended use of the AI system. Similarly, India must consider adopting a higher regulatory threshold for the use of AI by, at the very least, government institutions, given their potential for impacting citizens’ rights. Government uses of AI systems that have the potential to severely impact citizens’ fundamental rights include the use of AI in the disbursal of government benefits, surveillance, law enforcement and judicial sentencing.

  • Need for overarching principles based AI regulatory framework

Different sectoral regulators are currently evolving regulations to address the specific challenges posed by AI in their sector. While it is vital to harness the domain expertise of a sectoral regulator and encourage the development of sector-specific AI regulations, such piecemeal development of AI principles can lead to fragmentation in the overall approach to regulating AI in India. Therefore, to ensure uniformity in the approach to regulating AI systems across sectors, it is crucial to put in place a horizontal overarching principles-based framework. 

  • Adaptation of sectoral regulation to effectively regulate AI

In addition to an overarching regulatory framework which forms the basis for the regulation of AI, it is equally important to envisage how this framework would work with horizontal or sector-specific laws, such as consumer protection law, and with the applicability of product liability to various AI systems. Traditionally, consumer protection and product liability regulatory frameworks have been structured around fault-based claims. However, given the challenges concerning the explainability and transparency of decision making by AI systems, it may be difficult to establish the presence of defects in products and, for an individual who has suffered harm, to provide the necessary evidence in court. Hence, consumer protection laws may have to be adapted to stay relevant in the context of AI systems. Even sectoral legislation regulating the use of motor vehicles, such as the Motor Vehicles Act, 1988, would have to be modified to enable and regulate the use of autonomous vehicles and other AI transport systems. 

  • Contextualising AI systems for both their safe development and use

To ensure the effective and safe use of AI systems, they have to be designed, adapted and trained on relevant datasets depending on the context in which they will be deployed. The Working Document envisages India being the AI Garage for 40% of the world – developing AI solutions in India which can then be deployed in other emerging economies. Additionally, India will likely import AI systems developed in countries such as the US, EU and China to be deployed within the Indian context. Both scenarios involve the use of AI systems in a context distinct from the one in which they have been developed. Without effectively contextualising socio-technical systems like AI systems to the environment they are to be deployed in, there are enhanced safety, accuracy and reliability concerns. Regulatory standards and processes need to be developed in India to ascertain the safe use and deployment of AI systems that have been developed in contexts that are distinct from the ones in which they will be deployed. 

The NITI Working Document is the first step towards an informed discussion on the adoption of a regulatory framework to govern AI technology in India. However, there is a great deal of work to be done. Any regulatory framework developed by India to govern AI must balance the benefits and risks of deploying AI, diminish the risk of any harm and have a consumer protection framework in place to adequately address any harm that may arise. Besides this, the regulatory framework must ensure that the deployment and use of AI systems are in consonance with India’s constitutional scheme.

[September 30-October 7] CCG’s Week in Review: Curated News in Information Law and Policy

Huawei finds support from Indian telcos in the 5G rollout as PayPal withdraws from Facebook’s Libra cryptocurrency project; Foreign Portfolio Investors move MeitY against the Data Protection Bill; the CJEU rules against Facebook in a case relating to the global takedown of content; and Karnataka joins the list of states considering implementing the NRC to remove illegal immigrants – presenting this week’s most important developments in law, tech and national security.

Digital India

  • [Sep 30] Why the imminent global economic slowdown is a growth opportunity for Indian IT services firms, Tech Circle report.
  • [Sep 30] Norms tightened for IT items procurement for schools, The Hindu report.
  • [Oct 1] Govt runs full throttle towards AI, but tech giants want to upskill bureaucrats first, Analytics India Magazine report.
  • [Oct 3] MeitY launches smart-board for effective monitoring of the key programmes, The Economic Times report.
  • [Oct 3] “Use human not artificial intelligence…” to keep a tab on illegal constructions: Court to Mumbai civic body, NDTV report.
  • [Oct 3] India took 3 big productivity leaps: Nilekani, Livemint report.
  • [Oct 4] MeitY to push for more sops to lure electronic makers, The Economic Times report; Inc42 report.
  • [Oct 4] Core philosophy of Digital India embedded in Gandhian values: Ravi Shankar Prasad, Financial Express report.
  • [Oct 4] How can India leverage its data footprint? Experts weigh in at the India Economic Summit, Quartz report.
  • [Oct 4] Indians think jobs would be easy to find despite automation: WEF, Tech Circle report.
  • [Oct 4] Telangana govt adopts new framework to use drones for last-mile delivery, The Economic Times report.
  • [Oct 5] Want to see ‘Assembled in India’ on an iPhone: Ravi Shankar Prasad, The Economic Times report.
  • [Oct 6] Home market gets attractive for India’s IT giants, The Economic Times report.

Internet Governance

  • [Oct 2] India Govt requests maximum social media content takedowns in the world, Inc42 report; Tech Circle report.
  • [Oct 3] Facebook can be forced to delete defamatory content worldwide, top EU court rules, Politico EU report.
  • [Oct 4] EU ruling may spell trouble for Facebook in India, The Economic Times report.
  • [Oct 4] TikTok, TikTok… the clock is ticking on the question whether ByteDance pays its content creators, ET Tech report.
  • [Oct 6] Why data localization triggers a heated debate, The Economic Times report.
  • [Oct 7] Sensitive Indian govt data must be stored locally, Outlook report.

Data Protection and Privacy

  • [Sep 30] FPIs move MeitY against data bill, seek exemption, ET markets report, Inc42 report; Financial Express report.
  • [Oct 1] United States: CCPA exception approved by California legislature, Mondaq.com report.
  • [Oct 1] Privacy is gone, what we need is regulation, says Infosys Kris Gopalakrishnana, News18 report.
  • [Oct 1] Europe’s top court says active consent is needed for tracking cookies, Tech Crunch report.
  • [Oct 3] Turkey fines Facebook $282,000 over data privacy breach, Deccan Herald report.

Free Speech

  • [Oct 1] Singapore’s ‘fake news’ law to come into force Wednesday, but rights group worry it could stifle free speech, The Japan Times report.
  • [Oct 2] Minister says Singapore’s fake news law is about ‘enabling’ free speech, CNBC report.
  • [Oct 3] Hong Kong protests: Authorities to announce face mask ban, BBC News report.
  • [Oct 3] ECHR: Holocaust denial is not protected free speech, ASIL brief.
  • [Oct 4] FIR against Mani Ratnam, Adoor and 47 others who wrote to Modi on communal violence, The News Minute report; Times Now report.
  • [Oct 5] UN asks Malaysia to repeal laws curbing freedom of speech, The New Indian Express report.
  • [Oct 6] When will our varsities get freedom of expression: PC, Deccan Herald report.
  • [Oct 6] UK Government to make university students sign contracts limiting speech and behavior, The Times report.
  • [Oct 7] FIR on Adoor and others condemned, The Telegraph report.

Aadhaar, Digital IDs

  • [Sep 30] Plea in SC seeking linking of social media accounts with Aadhaar to check fake news, The Economic Times report.
  • [Oct 1] Why another omnibus national ID card?, The Hindu Business Line report.
  • [Oct 2] ‘Kenyan court process better than SC’s approach to Aadhaar challenge’: V Anand, who testified against biometric project, LiveLaw report.
  • [Oct 3] Why Aadhaar is a stumbling block in Modi govt’s flagship maternity scheme, The Print report.
  • [Oct 4] Parliament panel to review Aadhaar authority functioning, data security, NDTV report.
  • [Oct 5] Could Aadhaar linking stop GST frauds?, Financial Express report.
  • [Oct 6] Call for liquor sale-Aadhaar linking, The New Indian Express report.

Digital Payments, Fintech

  • [Oct 7] Vision cash-lite: A billion UPI transactions is not enough, Financial Express report.

Cryptocurrencies

  • [Oct 1] US SEC fines crypto company Block.one for unregistered ICO, Medianama report.
  • [Oct 1] South Korean Court issues landmark decision on crypto exchange hacking, Coin Desk report.
  • [Oct 2] The world’s most used cryptocurrency isn’t bitcoin, ET Markets report.
  • [Oct 2] Offline transactions: the final frontier for global crypto adoption, Coin Telegraph report.
  • [Oct 3] Betting on bitcoin prices may soon be deemed illegal gambling, The Economist report.
  • [Oct 3] Japan’s financial regulator issues draft guidelines for funds investing in crypto, Coin Desk report.
  • [Oct 3] Hackers launch widespread botnet attack on crypto wallets using cheap Russian malware, Coin Desk report.
  • [Oct 4] State-backed crypto exchange in Venezuela launches new crypto debit cards, Decrypt report.
  • [Oct 4] PayPal withdraws from Facebook-led Libra crypto project, Coin Desk report.
  • [Oct 5] Russia regulates digital rights, advances other crypto-related bills, Bitcoin.com report.
  • [Oct 5] Hong Kong regulates crypto funds, Decrypt report.

Cybersecurity and Cybercrime

  • [Sep 30] Legit-looking iPhone lightning cables that hack you will be mass produced and sold, Vice report.
  • [Sep 30] Blackberry launches new cybersecurity development labs, Infosecurity Magazine report.
  • [Oct 1] Cybersecurity experts warn that these 7 emerging technologies will make it easier for hackers to do their jobs, Business Insider report.
  • [Oct 1] US government confirms new aircraft cybersecurity move amid terrorism fears, Forbes report.
  • [Oct 2] ASEAN unites to fight back on cyber crime, GovInsider report; Asia One report.
  • [Oct 2] Adopting AI: the new cybersecurity playbook, TechRadar Pro report.
  • [Oct 4] US-UK Data Access Agreement, signed on Oct 3, is an executive agreement under the CLOUD Act, Medianama report.
  • [Oct 4] The lack of cybersecurity talent is ‘a national security threat,’ says DHS official, Tech Crunch report.
  • [Oct 4] Millions of Android phones are vulnerable to Israeli surveillance dealer attack, Forbes report; NDTV report.
  • [Oct 4] IoT devices, cloud solutions soft target for cybercriminals: Symantec, Tech Circle report.
  • [Oct 6] 7 cybersecurity threats that can sneak up on you, Wired report.
  • [Oct 6] No one could prevent another ‘WannaCry-style’ attack, says DHS official, Tech Crunch report.
  • [Oct 7] Indian firms rely more on automation for cybersecurity: Report, ET Tech report.

Cyberwarfare

  • [Oct 2] New ASEAN committee to implement norms for countries’ behaviour in cyberspace, CNA report.

Tech and National Security

  • [Sep 30] IAF ready for Balakot-type strike, says new chief Bhadauria, The Hindu report; Times of India report.
  • [Sep 30] Naval variant of LCA Tejas achieves another milestone during its test flight, Livemint report.
  • [Sep 30] SAAB wants to offer Gripen at half of Rafale cost, full tech transfer, The Print report.
  • [Sep 30] Rajnath harps on ‘second strike capability’, The Shillong Times report.
  • [Oct 1] EAM Jaishankar defends India’s S-400 missile system purchase from Russia as US sanctions threat, International Business Times report.
  • [Oct 1] SC for balance between liberty, national security, Hindustan Times report.
  • [Oct 2] Startups have it easy for defence deals up to Rs. 150 cr, ET Rise report, Swarajya Magazine report.
  • [Oct 3] Huawei-wary US puts more pressure on India, offers alternatives to data localization, The Economic Times report.
  • [Oct 4] India-Russia missile deal: What is CAATSA law and its implications?, Jagran Josh report.
  • [Oct 4] Army inducts Israeli ‘tank killers’ till DRDO develops new ones, Defence Aviation post report.
  • [Oct 4] China, Russia deepen technological ties, Defense One report.
  • [Oct 4] Will not be afraid of taking decisions for fear of attracting corruption complaints: Rajnath Singh, New Indian Express report.
  • [Oct 4] At conclave with naval chiefs of 10 countries, NSA Ajit Doval floats an idea, Hindustan Times report.
  • [Oct 6] Pathankot airbase to finally get enhanced security, The Economic Times report.
  • [Oct 6] Rafale with Meteor and Scalp missiles will give India unrivalled combat capability: MBDA, The Economic Times report.
  • [Oct 7] India, Bangladesh sign MoU for setting up a coastal surveillance radar in Bangladesh, The Economic Times report; Deccan Herald report.
  • [Oct 7] Indian operated T-90 tanks to become Russian army’s main battle tank, EurAsian Times report.
  • [Oct 7] IAF’s Sukhois to get more advanced avionics, radar, Defence Aviation post report.

Tech and Law Enforcement

  • [Sep 30] TMC MP Mahua Moitra wants to be impleaded in the WhatsApp traceability case, Medianama report; The Economic Times report.
  • [Oct 1] Role of GIS and emerging technologies in crime detection and prevention, Geospatial World.net report.
  • [Oct 2] TRAI to take more time on OTT norms; lawful interception, security issue now in focus, The Economic Times report.
  • [Oct 2] China invents super surveillance camera that can spot someone from a crowd of thousands, The Independent report.
  • [Oct 4] ‘Don’t introduce end-to-end encryption,’ UK, US and Australia ask Facebook in an open letter, Medianama report.
  • [Oct 4] Battling new-age cyber threats: Kerala Police leads the way, The Week report.
  • [Oct 7] India govt bid for WhatsApp decryption gets push as UK, US, Australia rally support, Entrackr report.

Tech and Elections

  • [Oct 1] WhatsApp was extensively exploited during 2019 elections in India: Report, Firstpost report.
  • [Oct 3] A national security problem without a parallel in American democracy, Defense One report.

Internal Security: J&K

  • [Sep 30] BDC polls across Jammu, Kashmir, Ladakh on Oct 24, The Economic Times report.
  • [Sep 30] India ‘invaded and occupied Kashmir, says Malaysian PM at UN General Assembly, The Hindu report.
  • [Sep 30] J&K police stations to have CCTV camera surveillance, News18 report.
  • [Oct 1] 5 judge Supreme court bench to hear multiple pleas on Article 370, Kashmir lockdown today, India Today report.
  • [Oct 1] India’s stand clear on Kashmir: won’t accept third-party mediation, India Today report.
  • [Oct 1] J&K directs officials to ensure all schools reopen by Thursday, NDTV report.
  • [Oct 2] ‘Depressed, frightened’: Minors held in Kashmir crackdown, Al Jazeera report.
  • [Oct 3] J&K: When the counting of the dead came to a halt, The Hindu report.
  • [Oct 3] High schools open in Kashmir, students missing, The Economic Times report.
  • [Oct 3] Jaishankar reiterates India’s claim over Pakistan-occupied Kashmir, The Hindu report.
  • [Oct 3] Normalcy prevails in Jammu and Kashmir, DD News report.
  • [Oct 3] Kashmiri leaders will be released one by one, India Today report.
  • [Oct 4] India slams Turkey, Malaysia remarks on J&K, The Hindu report.
  • [Oct 5] India’s clampdown hits Kashmir’s Silicon Valley, The Economic Times report.
  • [Oct 5] Traffic cop among 14 injured in grenade attack in South Kashmir, NDTV report; The Economic Times report.
  • [Oct 6] Kashmir situation normal, people happy with Article 370 abrogation: Prakash Javadekar, Times of India report.
  • [Oct 7] Kashmir residents say police forcibly taking over their homes for CRPF troops, Huffpost India report.

Internal Security: Northeast/ NRC

  • [Sep 30] Giving total control of Assam Rifles to MHA will adversely impact vigil: Army to Govt, The Economic Times report.
  • [Sep 30] NRC list impact: Assam’s foreigner tribunals to have 1,600 on contract, The Economic Times report.
  • [Sep 30] Assam NRC: Case against Wipro for rule violation, The Hindu report; News18 report; Scroll.in report.
  • [Sep 30] Hindu outfits demand NRC in Karnataka, Deccan Chronicle report; The Hindustan Times report.
  • [Oct 1] Centre extends AFSPA in three districts of Arunachal Pradesh for six months, ANI News report.
  • [Oct 1] Assam’s NRC: law schools launch legal aid clinic for excluded people, The Hindu report; Times of India report; The Wire report.
  • [Oct 1] Amit Shah in Kolkata: NRC to be implemented in West Bengal, infiltrators will be evicted, The Economic Times report.
  • [Oct 1] US Congress panel to focus on Kashmir, Assam, NRC in hearing on human rights in South Asia, News18 report.
  • [Oct 1] NRC must for national security; will be implemented: Amit Shah, The Hindu Business Line report.
  • [Oct 2] Bengali Hindu women not on NRC pin their hope on promise of another list, citizenship bill, The Print report.
  • [Oct 3] Citizenship Amendment Bill has become necessity for those left out of NRC: Assam BJP president Ranjeet Das, The Economic Times report.
  • [Oct 3] BJP govt in Karnataka mulling NRC to identify illegal migrants, The Economic Times report.
  • [Oct 3] Explained: Why Amit Shah wants to amend the Citizenship Act before undertaking countrywide NRC, The Indian Express report.
  • [Oct 4] Duplicating NPR, NRC to sharpen polarization: CPM, Deccan Herald report.
  • [Oct 5] We were told NRC India’s internal issue: Bangladesh, Livemint report.
  • [Oct 6] Prasanna calls NRC ‘unjust law’, The New Indian Express report.

National Security Institutions

  • [Sep 30] CRPF ‘denied’ ration cash: Govt must stop ‘second-class’ treatment, The Quint report.
  • [Oct 1] Army calls out ‘prejudiced’ foreign report on ‘torture’, refutes claim, Republic World report.
  • [Oct 2] India has no extraterritorial ambition, will fulfill regional and global security obligations: Bipin Rawat, The Economic Times report.

More on Huawei, 5G

  • [Sep 30] Norway open to Huawei supplying 5G equipment, Forbes report.
  • [Sep 30] Airtel deploys 100 hops of Huawei’s 5G technology, The Economic Times report.
  • [Oct 1] America’s answer to Huawei, Foreign Policy report; Tech Circle report.
  • [Oct 1] Huawei buys access to UK innovation with Oxford stake, Financial Times report.
  • [Oct 3] India to take bilateral approach on issues faced by other countries with China: Jaishankar, The Hindu report.
  • [Oct 4] Bharti Chairman Sunil Mittal says India should allow Huawei in 5G, The Economic Times report.
  • [Oct 6] 5G rollout: Huawei finds support from telecom industry, Financial Express report.

Emerging Tech: AI, Facial Recognition

  • [Sep 30] Bengaluru set to roll out AI-based traffic solution at all signals, Entrackr report.
  • [Oct 1] AI is being used to diagnose disease and design new drugs, Forbes report.
  • [Oct 1] Only 10 jobs created for every 100 jobs taken away by AI, The Economic Times report.
  • [Oct 2] Emerging tech is helping companies grow revenues 2x: report, ET Tech report.
  • [Oct 2] Google using dubious tactics to target people with ‘darker skin’ in facial recognition project: sources, Daily News report.
  • [Oct 2] Three problems posed by deepfakes that technology won’t solve, MIT Technology Review report.
  • [Oct 3] Getting a new mobile number in China will involve a facial recognition test, Quartz report.
  • [Oct 4] Google contractors targeting homeless people, college students to collect their facial recognition data: Report, Medianama report.
  • [Oct 4] More jobs will be created than are lost from the AI revolution: WEF AI Head, Livemint report.
  • [Oct 6] IIT-Guwahati develops AI-based tool for electric vehicle motor, Livemint report.
  • [Oct 7] Even if China misuses AI tech, Satya Nadella thinks blocking China’s AI research is a bad idea, India Times report.

Big Tech

  • [Oct 3] Dial P for privacy: Google has three new features for users, Times of India report.

Opinions and Analyses

  • [Sep 26] Richard Stengel, Time, We’re in the middle of a global disinformation war. Here’s what we need to do to win.
  • [Sep 29] Ilker Koksal, Forbes, The shift toward decentralized finance: Why are financial firms turning to crypto?
  • [Sep 30] Nistula Hebbar, The Hindu, Govt. views grassroots development in Kashmir as biggest hope for peace.
  • [Sep 30] Simone McCarthy, South China Morning Post, Could China’s strict cyber controls gain international acceptance?
  • [Sep 30] Nele Achten, Lawfare blog, New UN Debate on cybersecurity in the context of international security.
  • [Sep 30] Dexter Fergie, Defense One, How ‘national security’ took over America.
  • [Sep 30] Bonnie Girard, The Diplomat, A firsthand account of Huawei’s PR drive.
  • [Oct 1] The Economic Times, Rafale: Past tense but future perfect.
  • [Oct 1] Simon Chandler, Forbes, AI has become a tool for classifying and ranking people.
  • [Oct 2] Ajay Batra, Business World, Rethink India! – MMRCA, ESDM & Data Privacy Policy.
  • [Oct 2] Carisa Nietsche, National Interest, Why Europe won’t combat Huawei’s Trojan tech.
  • [Oct 3] Aruna Sharma, Financial Express, The digital way: growth with welfare.
  • [Oct 3] Alok Prasanna Kumar, Medianama, When it comes to Netflix, the Government of India has no chill.
  • [Oct 3] Fredrik Bussler, Forbes, Why we need crypto for good.
  • [Oct 3] Panos Mourdoukoutas, Forbes, India changed the game in Kashmir – Now what?
  • [Oct 3] Grant Wyeth, The Diplomat, The NRC and India’s unfinished partition.
  • [Oct 3] Zak Doffman, Forbes, Is Huawei’s worst Google nightmare coming true?
  • [Oct 4] Oren Yunger, Tech Crunch, Cybersecurity is a bubble, but it’s not ready to burst.
  • [Oct 4] Minakshi Buragohain, Indian Express, NRS: Supporters and opposers must engage each other with empathy.
  • [Oct 4] Frank Ready, Law.com, 27 countries agreed on ‘acceptable’ cyberspace behavior. Now comes the hard part.
  • [Oct 4] Samir Saran, World Economic Forum (blog), 3 reasons why data is not the new oil and why this matters to India.
  • [Oct 4] Andrew Marantz, The New York Times, Free Speech is killing us.
  • [Oct 4] Financial Times editorial, ECJ ruling risks for freedom of speech online.
  • [Oct 4] George Kamis, GCN, Digital transformation requires a modern approach to cybersecurity.
  • [Oct 4] Naomi Xu Elegant and Grady McGregor, Fortune, Hong Kong’s mask ban pits anonymity against the surveillance state.
  • [Oct 4] Prashanth Parameswaran, The Diplomat, What’s behind the new US-ASEAN cyber dialogue?
  • [Oct 5] Huong Le Thu, The Strategist, Cybersecurity and geopolitics: why Southeast Asia is wary of a Huawei ban.
  • [Oct 5] Hannah Devlin, The Guardian, We are hurtling towards a surveillance state: the rise of facial recognition technology.
  • [Oct 5] PV Navaneethakrishnan, The Hindu, Why no takers? (for ME/M.Tech programmes).
  • [Oct 6] Aakar Patel, Times of India blog, Cases against PC, letter-writing celebs show liberties are at risk.
  • [Oct 6] Suhasini Haidar, The Hindu, Explained: How will purchases from Russia affect India-US ties?
  • [Oct 6] Sumit Chakraberty, Livemint, Evolution of business models in the era of privacy by design.
  • [Oct 6] Spy’s Eye, Outlook, Insider threat management.
  • [Oct 6] Roger Marshall, Deccan Herald, Big oil, Big Data and the shape of water.
  • [Oct 6] Neil Chatterjee, Fortune, The power grid is evolving. Cybersecurity must too.
  • [Oct 7] Scott W Pink, Mondaq.com, EU: What is GDPR and CCPA and how does it impact blockchain?
  • [Oct 7] GN Devy, The Telegraph, Has India slid into an irreversible Talibanization of the mind?
  • [Oct 7] Susan Ariel Aaronson, South China Morning Post, The Trump administration’s approach to AI is not that smart: it’s about cooperation, not domination.

[September 23-30] CCG’s Week in Review: Curated News in Information Law and Policy

The deadline to link PAN cards with Aadhaar was extended to December 31 this week; the Election Commission ruled that the voting rights of those excluded in the NRC process remain unaffected; the Home Minister proposed a digital census with multipurpose ID cards for 2021; and 27 nations including the US, UK and Canada issued a joint statement urging a rules-based order in cyberspace – presenting this week’s most important developments in law, technology and national security.

Aadhaar and Digital IDs

  • [Sep 23] Home Minister announces digital census in 2021, proposed multipurpose ID card, Entrackr report; Business Today report.
  • [Sep 24] NRIs can now apply for Aadhaar on arrival without 182-day wait, The Economic Times report.
  • [Sep 24] Aadhaar will be linked to driving license to avoid forgery: Ravi Shankar Prasad, The Indian Express report.
  • [Sep 24] One nation, one card? Amit Shah floats idea of all-in-one ID; here are all the problems with that idea, Medianama report; Money Control report.
  • [Sep 24] Explained: Is India likely to have a multipurpose national ID card? The Indian Express report.
  • [Sep 24] UIDAI nod to ‘voluntary’ use of Aadhaar for National Population Register rollout, The Economic Times report.
  • [Sep 24] Govt must decide on Aadhaar-social media linkage: SC, Deccan Herald report.
  • [Sep 25] New law needed for Aadhaar-social media linkage: UIDAI, The Economic Times report; Inc42 report.
  • [Sep 26] NPR process to include passport, voter ID, Aadhaar and other details, Business Standard report.
  • [Sep 27] Gang involved in making fake Aadhaar cards busted, The Tribune report.
  • [Sep 27] What will happen if you don’t link your PAN card with Aadhaar by Sep 30, The Quint report.
  • [Sep 27] Explained: The National Population Register, and the controversy around it, The Indian Express report.
  • [Sep 27] Aadhaar to weed out bogus social security beneficiaries in Karnataka, Deccan Herald report.
  • [Sep 29] Bajrang Dal wants Aadhaar mandatory at dandiya to keep ‘non-Hindus’ out, The Hindustan Times report; The Wire report.
  • [Sep 30] Kerala urges Centre to extend deadline to link ration cards with Aadhaar, The News Minute report.
  • [Sep 30] PAN-Aadhaar linking deadline extended to December 31, The Economic Times report.

Digital India 

  • [Sep 25] India’s regulatory approach should focus on the regulation of the ‘core’: IAMAI, Livemint report.
  • [Sep 27] India may have to offer sops to boost electronic manufacturing, ET Tech report; Inc42 report.
  • [Sep 27] Digital India, start-ups are priorities for $5 trillion economy: PM Modi, Medianama report.
  • [Sep 29] Tech giants aim to skill Indian govt officials in AI, cloud, ET CIO report.
  • [Sep 29] India’s share in IT, R&D biz up in 2 years: report, The Economic Times report.

Internet Governance

  • [Sep 24] Supreme Court to MeitY: What’s the status of intermediary guidelines? Tell us by Oct 15, Medianama report.
  • [Sep 26] Will not be ‘excessive’ with social media rules, say Govt officials, Inc42 report.
  • [Sep 26] Government trying to balance privacy and security in draft IT intermediary norms, The Economic Times report.
  • [Sep 27] Citizens, tech companies served better with some regulation: Facebook India MD Ajit Mohan, ET Tech report; Inc42 report.
  • [Sep 27] Balance benefits of internet, data security: Google CEO Sundar Pichai, ET Tech report; Business Today report.

Free Speech

  • [Sep 25] Jadavpur University calls upon ‘stakeholders’ to ensure free speech on campus, The New Indian Express report.
  • [Sep 28] RSS raises objections to uncensored content of Manoj Bajpayee’s “The Family Man”, The Hindu report; Outlook report.

Privacy and Data Protection

  • [Sep 23] A landmark decision on Tuesday could radically reshape how Google’s search results work, Business Insider report.
  • [Sep 23] Google tightens its voice assistant rules amidst privacy backlash, Wired report.
  • [Sep 24] Dell rolls out new data protection storage appliances and capabilities, ZDNet report.
  • [Sep 24] ‘Right to be forgotten’ privacy rule is limited by Europe’s top court, The New York Times report; Live Law report.
  • [Sep 27] Nigeria launches investigation into Truecaller for potential breach of privacy, Medianama report.
  • [Sep 29] Right to be forgotten will be arduous as India frames data protection law, Business Standard report.
  • [Sep 30] FPIs move against data bill, seek exemption, ET Telecom report; Entrackr report.

Data Localisation

  • [Sep 26] Reconsider imposition of data localisation: IAMAI report, The Economic Times report.
  • [Sep 27] Why data is not oil: Here’s how India’s data localisation norms will hurt the economy, Inc42 report.

Digital Payments and Fintech

  • [Sep 23] RBI rider on credit bureau data access has Fintech in a quandary, ET Tech report.

Cryptocurrencies

  • [Sep 23] Facebook reveals Libra currency basket breakdown, Coin Desk report.
  • [Sep 23] The face of India’s crypto lobby readies for a clash, Ozy report.
  • [Sep 23] Why has Brazil’s Central Bank included crypto assets in trade balance? Coin Telegraph report.
  • [Sep 24] French retailers widening crypto acceptance, Tech Xplore report.
  • [Sep 26] Why crypto hoaxes are so successful, Quartz report.
  • [Sep 26] South Africa: the next frontier for crypto exchanges, Coin Telegraph report.
  • [Sep 27] The crypto wars’ strange bedfellows, Forbes report.
  • [Sep 28] Crypto industry is already preparing for Google’s ‘quantum supremacy’, Decrypt report.
  • [Sep 29] How crypto gambling is regulated around the world, Coin Telegraph report.

Tech and Law Enforcement

  • [Sep 29] New WhatsApp and Facebook Encryption ‘Backdoors’ – What’s really going on, Forbes report.
  • [Sep 28] Facebook, WhatsApp will have to share messages with UK Government, Bloomberg report.
  • [Sep 23] Secret FBI subpoenas scoop up personal data from scores of companies, The New York Times report.
  • [Sep 23] ‘Don’t transfer the WhatsApp traceability case’, Internet Freedom Foundation asks Supreme Court, Medianama report.
  • [Sep 24] China offers free subway rides to citizens who register their face with surveillance system, The Independent report.
  • [Sep 24] Facial recognition technology in public housing prompts backlash, The New York Times report.
  • [Sep 24] Facebook-Aadhaar linkage and WhatsApp traceability: Supreme Court says government must frame rules, CNBC TV18 report.
  • [Sep 27] Fashion that counters surveillance cameras, Business Times report.
  • [Sep 27] Unnao rape case: Delhi court directs Apple to give Sengar’s location details on day of alleged rape, Medianama report.
  • [Sep 27] Face masks to decoy t-shirts: the rise of anti-surveillance fashion, Times of India report.
  • [Sep 30] Battle for privacy and encryption: WhatsApp and government head for a showdown on access to messages, ET Prime report.
  • [Sep 29] Improving digital evidence sharing, Scottish Government news report; Public technology report.

Internal Security: J&K

  • [Sep 23] Government launches internet facilitation centre in Pulwama for students, Times of India report; Business Standard report.
  • [Sep 23] Army chief rejects ‘clampdown’ in Jammu and Kashmir, Times of India report.
  • [Sep 24] Rising power: Why India has faced muted criticism over its Kashmir policy, Business Standard report.
  • [Sep 24] ‘Restore Article 370, 35A in Jammu and Kashmir, withdraw army, paramilitary forces’: 5-member women’s group will submit demands to Amit Shah, Firstpost report.
  • [Sep 24] No normalcy in Kashmir, says fact finding team, The Hindu report.
  • [Sep 25] End clampdown: Kashmir media, The Telegraph report.
  • [Sep 25] Resolve Kashmir issue through dialogue and not through collision: Erdogan, The Economic Times report.
  • [Sep 25] Rajya Sabha deputy chair thwarts Pakistan’s attempt to raise Kashmir at Eurasian Conference, The Economic Times report.
  • [Sep 25] Pakistan leader will urge UN intervention in Kashmir, The New York Times report.
  • [Sep 25] NSA Ajit Doval back in Srinagar to review security situation, The Hindustan Times report.
  • [Sep 27] Communication curbs add fresh challenge to Kashmir counter-insurgency operations, News18 report.
  • [Sep 27] Fresh restrictions in parts of Kashmir, The Hindu report.
  • [Sep 27] US wants ‘rapid’ easing of Kashmir restrictions, Times of India report.
  • [Sep 27] Kashmir issue: Rescind action on Art. 370, OIC tells India, The Hindu report.
  • [Sep 28] India objects to China’s reference to J&K and Ladakh at UNGA, The Economic Times report; The Hindu report.
  • [Sep 29] Surveillance, area domination operations intensified in Kashmir, The Economic Times report; Financial Express report.
  • [Sep 29] Police impose restrictions in J&K after Imran Khan’s speech at UNGA, India Today report.

Internal Security: NRC and the North-East

  • [Sep 23] Assam framing cyber security policy to secure data related to NRC, police, services, The Economic Times report; Money Control report.
  • [Sep 24] BJP will tell SC that we reject this NRC, says Himanta Biswa Sarma, Business Standard report.
  • [Sep 24] Amit Shah to speak on NRC, Citizenship Amendment Bill in Kolkata on Oct 1, The Economic Times report.
  • [Sep 26] ‘Expensive’ legal battle for those rejected in Assam NRC final list, The Economic Times report.
  • [Sep 27] Scared of NRC? Come back in 2022, The Telegraph report.
  • [Sep 27] Voters left out of NRC will have right to vote, rules Election Commission, India Today report; The Wire report.
  • [Sep 27] NRC: Assam government announces 200 Foreigners Tribunals in 33 districts, Times Now report; Times of India report.
  • [Sep 28] Judge urges new FT members to examine NRC claims with utmost care, Times of India report.

National Security Legislation

  • [Sep 23] Centre will reintroduce Citizenship Bill in Parliament: Himanta Biswa Sarma, The Hindu report.
  • [Sep 26] National Security Guard: History, Functions and Operations, Jagran Josh report.
  • [Sep 28] Left parties seek revocation of decision on Article 370, The Tribune India report.

Tech and National Security

  • [Sep 25] Army to start using Artificial Intelligence in 2-3 years: South Western Army commander, The Print report; India Today report; The New Indian Express report; Financial Express report.
  • [Sep 23] Modi, Trump set new course on terrorism, border security, The Hindu report.
  • [Sep 23] PM Modi in the US: Trump promises more defence deals with India, military trade to go up, Financial Express report.
  • [Sep 23] Punjab police bust terror module supplied with weapons by drones from Pak, NDTV report.
  • [Sep 26] Lockheed Martin to begin supplying F-16 wings from Hyderabad plant in 2020, Livemint report.
  • [Sep 26] Drones used for cross-border arms infiltration in Punjab a national security issue, says Randhawa, The Hindu report.
  • [Sep 27] UK MoD sets up cyber team for secure innovation, UK Authority report.
  • [Sep 29] New tri-services special ops division, meant for surgical strikes, finishes first exercise today, The Print report.
  • [Sep 30] After Saudi attacks, India developing anti-drone technology to counter drone menace, Eurasian Times report.

Tech and Elections

  • [Sep 20] Microsoft will offer free Windows 7 support for US election officials through 2020, Cyber Scoop report.
  • [Sep 26] Social media platforms to follow ‘code of ethics’ in all future elections: EC, The Economic Times report.
  • [Sep 28] Why is EC not making ‘authentic’ 2019 Lok Sabha results public? The Quint report.

Cybersecurity

  • [Sep 24] Androids and iPhones hacked with just one WhatsApp click – and Tibetans are under attack, Forbes report.
  • [Sep 25] Sharp questions can help board oversee cybersecurity, The Wall Street Journal report.
  • [Sep 25] What we know about CrowdStrike, the cybersecurity firm Trump mentioned in Ukraine call, and its billionaire CEO, Forbes report.
  • [Sep 25] 36% of smaller firms witnessed data breaches in 2019 globally, ET Rise report.
  • [Sep 28] Defence Construction Canada hit by cyber attack – corporation’s team trying to restore full IT capability, Ottawa Citizen report.
  • [Sep 29] Experts call for collective efforts to counter cyber threats, The New Indian Express report.
  • [Sep 29] Microsoft spots malware that turns PCs into zombie proxies, ET Telecom report.
  • [Sep 29] US steps up scrutiny of airplane cybersecurity, The Wall Street Journal report.

Cyberwarfare

  • [Sep 24] 27 countries sign cybersecurity pledge urging rules-based control over cyberspace in Joint Statement, with digs at China and Russia, CNN report; IT World Canada report; Meri Talk report.
  • [Sep 26] Cyber Peace Institute fills a critical need for cyber attack victims, Microsoft blog.
  • [Sep 29] Britain is ‘at war every day’ due to constant cyber attacks, Chief of the Defence Staff says, The Telegraph report.

Telecom and 5G

  • [Sep 27] Telcos’ IT investments intact, auto companies may slow pace: IBM exec, ET Tech report.
  • [Sep 29] Telecom players to lead digital transformation in India, BW Businessworld report.

More on Huawei

  • [Sep 22] Huawei confirms another nasty surprise for Mate 30 buyers, Forbes report.
  • [Sep 23] We’re on the same page with government on security: Huawei, The Economic Times report.
  • [Sep 24] The debate around 5G’s safety is getting in the way of science, Quartz report (paywall).
  • [Sep 24] Govt will take call on Huawei with national interest in mind: Telecom Secy, Business Standard report.
  • [Sep 24] Huawei enables 5G smart travel system at Beijing airport, Tech Radar report.
  • [Sep 25] Huawei 5G backdoor entry unproven, The Economic Times report.
  • [Sep 25] US prepares $1 bn fund to replace banned Huawei kit, Tech Radar report.
  • [Sep 26] Google releases large dataset of deepfakes for researchers, Medianama report.
  • [Sep 26] Huawei willing to license 5G technology to a US firm, The Hindu Business Line report; Business Standard report.
  • [Sep 26] Southeast Asia’s top phone carrier still open to Huawei 5G, Bloomberg report.
  • [Sep 29] Russia rolls out the red carpet for Huawei over 5G, The Economic Times report.

Emerging Tech and AI

  • [Sep 20] Google researchers have reportedly achieved “Quantum Supremacy”, Financial Times report; MIT Technology Review report.
  • [Sep 23] Artificial Intelligence revolution in healthcare in India: All we need to know, The Hindustan Times report.
  • [Sep 23] A new joystick for the brain-controlled vehicles of the future, Defense One report.
  • [Sep 24] Computing and AI: Humanistic Perspectives from MIT, MIT News report.
  • [Sep 24] Emerging technologies such as AI, 5G posing threats to privacy, says report, China Daily report.
  • [Sep 25] Alibaba unveils chip developed for artificial intelligence era, Financial Times report.
  • [Sep 26] Pentagon wants AI to interpret ‘strategic activity’ around the globe, Defense One report.
  • [Sep 27] Only 10 jobs created for every 100 jobs taken away by AI, ET Tech report.
  • [Sep 27] Experts say these emerging technologies should concern us, Business Insider report.
  • [Sep 27] What is on the horizon for export controls on ‘emerging technologies’? Industry comments may hold a clue, Mondaq.com report.
  • [Sep 27] India can become world leader in artificial intelligence: Vishal Sikka, Money Control report.
  • [Sep 27] Elon Musk issues a terrifying prediction of ‘AI robot swarms’ and huge threat to mankind, The Daily Express (UK) report.
  • [Sep 27] Russia’s national AI Centre is taking shape, Defense One report.
  • [Sep 29] Explained: What is ‘quantum supremacy’, The Hindu report.
  • [Sep 29] Why are scientists so excited about a new quantum computing milestone?, Scroll.in report.
  • [Sep 29] Artificial Intelligence has a gender bias problem – just ask Siri, The Wire report.
  • [Sep 29] How AI is changing the landscape of digital marketing, Inc42 report.

Opinions and Analyses

  • [Sep 21] Wim Zijnenburg, Defense One, Time to Harden International Norms on Armed Drones.
  • [Sep 23] David Sanger and Julian Barnes, The New York Times, The urgent search for a cyber silver bullet against Iran.
  • [Sep 23] Neven Ahmad, PRIO Blog, The EU’s response to the drone age: A united sky.
  • [Sep 23] Biswajit Dhar and KS Chalapati Rao, The Wire, Why an India-US Free Trade Agreement would require New Delhi to reorient key policies.
  • [Sep 23] Filip Cotfas, Money Control, Five reasons why data loss prevention has to be taken seriously.
  • [Sep 23] NF Mendoza, Tech Republic, 10 policy principles needed for artificial intelligence.
  • [Sep 24] Ali Ahmed, News Click, Are Indian armed forces turning partisan? The changing civil-military relationship needs monitoring.
  • [Sep 24] Editorial, Deccan Herald, A polity drunk on Aadhaar.
  • [Sep 24] Mike Loukides, Quartz, The biggest problem with social media has nothing to do with free speech.
  • [Sep 24] Ananth Padmanabhan, Medianama, Civilian Drones: Privacy challenges and potential resolution. 
  • [Sep 24] Celine Herwijer and Dominic Kailash Nath Waughray, World Economic Forum, How technology can fast-track the global goals.
  • [Sep 24] S. Jaishankar, Financial Times, Changing the status of Jammu and Kashmir will benefit all of India.
  • [Sep 24] Editorial, Livemint, Aadhaar Mark 2.
  • [Sep 24] Vishal Chawla, Analytics India Magazine, AI in Defence: How India compares to US, China, Russia and South Korea.
  • [Sep 25] Craig Borysowich, IT Toolbox, Origin of Markets for Artificial Intelligence.
  • [Sep 25] Sudeep Chakravarti, Livemint, After Assam, NRC troubles may visit ‘sister’ Tripura.
  • [Sep 25] DH Kass, MSSP Blog, Cyber Warfare: New Rules of Engagement?
  • [Sep 25] Chris Roberts, Observer, How artificial intelligence could make nuclear war more likely.
  • [Sep 25] Ken Tola, Forbes, What is cybersecurity?
  • [Sep 25] William Dixon and Jamil Farshchi, World Economic Forum, AI is transforming cybercrime. Here’s how we can fight back.
  • [Sep 25] Patrick Tucker, Defense One, Big Tech bulks up its anti-extremism group. But will it do more than talk?
  • [Sep 26] Udbhav Tiwari, Huffpost India, Despite last year’s Aadhaar judgement, Indians have less privacy than ever.
  • [Sep 26] Sylvia Mishra, Medianama, India and the United States: The time has come to collaborate on commercial drones.
  • [Sep 26] Subimal Bhattacharjee, The Hindu Business Line, Data flows and our national security interests.
  • [Sep 26] Ram Sagar, Analytics India Magazine, Top countries that are betting big on AI-based surveillance.
  • [Sep 26] Patrick Tucker, Defense One, AI will tell future medics who lives and who dies on the battlefield.
  • [Sep 26] Karen Hao, MIT Technology Review, This is how AI bias really happens – and why it’s so hard to fix.
  • [Sep 27] AG Noorani, Frontline, Kashmir dispute: Domestic or world issue?
  • [Sep 27] Sishanta Talukdar, Frontline, Final NRC list: List of exclusion.
  • [Sep 27] Freddie Stuart, Open Democracy, How facial recognition technology is bringing surveillance capitalism to our streets.
  • [Sep 27] Paul de Havilland, Crypto Briefing, Did Bitcoin crash or dip? Crypto’s trajectory moving forward.
  • [Sep 28] John Naughton, The Guardian, Will advances in quantum computing affect internet security?
  • [Sep 28] Suhrith Parthasarathy, The Hindu, The top court and a grave of freedom.
  • [Sep 28] Kazim Rizvi, YourStory, Data Protection Authority: the cornerstone to implement data privacy.
  • [Sep 28] Shekhar Gupta, The Print, Modi has convinced the world that Kashmir is India’s internal affair – but they’re still watching.
  • [Sep 29] Indrani Bagchi, The Economic Times, Why India needs to tread carefully on Kashmir.
  • [Sep 29] Medha Dutta Yadav, The New Indian Express, Data: Brave new frontier.
  • [Sep 29] Jon Markman, Forbes, New cybersecurity companies have their heads in the cloud.
  • [Sep 29] Editorial, The New York Times, On cybersecurity: Two scoops of perspective.
  • [Sep 30] Kuldip Singh, The Quint, New IAF Chief’s appointment: Why RKS Bhadauria must tread lightly.
  • [Sep 30] Karishma Koshal, The Caravan, With the data-protection bill in limbo, these policies contravene the right to privacy.

[September 16-23] CCG’s Week in Review: Curated News in Information Law and Policy

Cybersecurity experts warned of a new ‘SIM jacking’ threat, and the Kerala High Court recognised a right to access the internet as the internet shutdown in Kashmir entered its 50th day; more updates on the linkage of Aadhaar with voter IDs and social media as the Indian Army braces itself to adopt AI. Presenting this week’s most important developments in law, tech and national security.

Aadhaar

  • [Sep 16] Here are the amendments the Election Commission wants to the Representation of the People Act for Aadhaar-Voter ID linkage, Medianama report.
  • [Sep 18] Why Maj. Gen. Vombatkere has challenged Aadhaar Amendment Act in the Supreme Court; On WhatsApp and traceability, Medianama report.
  • [Sep 19] Drop in Aadhaar enrolments in J&K, The Economic Times report.
  • [Sep 20] In-principle decision to link Aadhaar with GST registration, The Economic Times report.
  • [Sep 23] Aadhaar card is now mandatory for nominees of your EPF account, Livemint report.

Digital India

  • [Sep 18] Indo-US ICT working group to meet on Sept 30, Oct 1, Medianama report.
  • [Sep 17] NITI Aayog frames guidelines for automated inspection of vehicles, ET Auto report.
  • [Sep 17] What TikTok told MEITY about its intermediary status, data collection, and policies for children, Medianama report.
  • [Sep 18] Soon, lands will have Aadhaar-like unique numbers, The Economic Times report; Business Today report.
  • [Sep 18] Drones to be used to digitally map India: report, Medianama report.
  • [Sep 18] PMO panel to release policy to boost handset manufacturing in India: report, Medianama report.
  • [Sep 19] Karnataka to set up exclusive body to boost innovation, The Hindu report.
  • [Sep 20] ‘Right To Access Internet Is Part Of Right To Privacy And Right To Education’: Kerala HC, Live Law report; The Hindu report; NDTV report.

Data Protection and Privacy

  • [Sep 15] Privacy debate between govt, Facebook continues; no winner yet, Money Control report.
  • [Sep 16] Singapore, Philippines sign MoU on personal data protection, The Manila Times report.
  • [Sep 16] Industry wants careful drafting of regulations on non-personal data, The Economic Times report.
  • [Sep 16] Here are the top three reasons why data protection is required in every business, Firstpost report.
  • [Sep 20] Sensitive, super-sensitive data must be stored locally in India: RS Prasad, Business Standard report.
  • [Sep 20] Yet another data leak in Indian government database exposes multiple citizen IDs, Inc42 report.
  • [Sep 22] Infosys co-founder Kris Gopalakrishnan to lead panel on protection of non-personal data, Financial Express report.

E-Commerce

  • [Sep 16] Odisha government makes e-marketplace mandatory for procurements, The New Indian Express report.
  • [Sep 16] US antitrust officials investigate Amazon’s marketplace practices, Medianama report.
  • [Sep 17] Ministry of Consumer Affairs extends deadline for comments on draft E-Commerce Guidelines 2019 to October 31, Medianama report.

FinTech and Digital Payments

  • [Sep 16] WhatsApp to roll out its payment services by end of this year: report, Medianama report; The Economic Times report.
  • [Sep 18] RBI proposes norms to regulate payment gateways and payment aggregators, Entrackr report.
  • [Sep 19] Regulatory shock for fintech firms: RBI blocks unregulated access to consumer credit history, Entrackr report.
  • [Sep 19] DSCI, MeitY and Google India join hands for ‘Digital Payment Abhiyan’, The Economic Times report.

Cryptocurrencies

  • [Sep 16] The toss of a Bitcoin: How crypto ban will hurt 5 mn Indians, 20k Blockchain developers, The Economic Times report.
  • [Sep 16] US sanctions three alleged crypto hacking groups from North Korea, Coin Desk report.
  • [Sep 16] Crypto firms assess how to comply with anti-money laundering standards, The Wall Street Journal report.
  • [Sep 19] Bitcoin and crypto wallets are now being targeted by malware, Forbes report.
  • [Sep 21] Weekends are for Altcoins when it comes to crypto market gains, ET Markets report.
  • [Sep 21] Chinese officials surprisingly chill on crypto, Decrypt report.

Cybersecurity

  • [Sep 13] Ransomware has a new target, Defense One report.
  • [Sep 16] Deep learning and machine learning to transform cybersecurity, Tech Wire Asia report.
  • [Sep 16] America needs a whole-of-society approach to cybersecurity. ‘Grand Challenges’ can help, Defense One report.
  • [Sep 17] Financial asset firm PCI ordered to pay $1.5 million for poor cybersecurity practices, ZD Net report.
  • [Sep 20] Current Act outdated, need to include cyber security in IT legal framework: DCA chief, The Indian Express report.
  • [Sep 20] 10% of IT budget should be used for cybersecurity: Rear Admiral Mohit Gupta, ET Times report.
  • [Sep 20] Once hacked, twice shy: How auto supplier Harman learned to fight cyber car jackers, ET Auto report.
  • [Sep 21] Cybersecurity a big opportunity for telcos, says IBM executive, The Economic Times report.
  • [Sep 23] Cybersecurity experts raise alarm over new SIM jacking threat, The New Indian Express report.
  • [Sep 23] Cybersecurity: Tackling the menace of phishing, Financial Express report.

Tech and Law Enforcement; Surveillance

  • [Sep 15] Facebook moots ‘prospective’ solution to WhatsApp issue; India stands firm on traceability, Business Today report; Livemint report.
  • [Sep 18] Chinese firms are driving the rise of AI surveillance across Africa, Quartz report.
  • [Sep 18] Documents reveal how Russia taps phone companies for surveillance, Tech Crunch report.
  • [Sep 20] WhatsApp traceability case petitioner asks court to remove Aadhaar from the plea, consider only ‘authorised govt proofs’, Medianama report; Inc42 report; Bar & Bench report.
  • [Sep 20] Chennai-based KPost says traceability is possible, wants to be impleaded in WhatsApp case, Medianama report.

Tech and National Security

  • [Sep 13] Pentagon’s former top hacker wants to inject some Silicon Valley into the defense industry, Defense One report.
  • [Sep 16] Here’s how startups are helping the Defence Ministry up its game, Money Control report.
  • [Sep 16] After 6 years in exile, Edward Snowden explains himself, Wired report.
  • [Sep 17] US tells Saudi Arabia oil attacks were launched from Iran, The Wall Street Journal report.
  • [Sep 17] Why Rafale jets may be inducted into IAF by next summer only, Livemint report.
  • [Sep 17] US Air Force to shift billions of dollars to network its weapons, Defense One report.
  • [Sep 18] India to achieve US$26 billion defence industry by 2025: Defence Minister, Business Standard report.
  • [Sep 18] Mitigating security risks from emerging technologies, Army Technology analysis.
  • [Sep 18] Revised draft defence procurement norms to be ready by November end, The Hindu report.
  • [Sep 20] The NSA is running a satellite hacking experiment, Defense One report.
  • [Sep 20] Army to host seminar on artificial intelligence next week; seeks to enhance lethality, The Economic Times report; India Today report; The New Indian Express report.
  • [Sep 20] Defence Procurement: Not a level playing field for private sector, PSUs still rule, Bharat Shakti report.
  • [Sep 20] Indian Air Force ‘accepts’ Rafale, formal hand over on Dussehra, Livemint report.
  • [Sep 22] Amid US-India blooming ties, Washington prepares to take down Indian air defence systems, EurAsian Times report.
  • [Sep 23] Government likely to order 36 more Rafale fighter jets, The Economic Times report.

Tech and Elections

  • [Sep 20] Social media companies raise concerns over Election Commission’s voluntary code of ethics, Medianama report.

Internal Security: J&K

  • [Sep 16] Supreme Court says normalcy to return to Kashmir but with national security in mind, India Today report.
  • [Sep 16] Farooq Abdullah booked under Public Safety Act, committee to decide duration of arrest: report, Financial Express report.
  • [Sep 17] Amnesty’s report on the (mis)use of Public Safety Act in J&K counters the govt’s narrative, Youth ki Awaaz report.
  • [Sep 18] China says Kashmir issue may not be a ‘major topic’ during Modi-Xi meet, Livemint report.
  • [Sep 19] In Pakistan-held Kashmir, growing calls for independence, The New York Times report.
  • [Sep 20] Kashmir residents say they are being charged by telcos despite no service, The Hindu report.
  • [Sep 20] UN Chief could discuss Kashmir issues at UNGA: UN spokesman, The Economic Times report.
  • [Sep 20] How military drones are becoming deadly weapons across the globe, The Economic Times report.
  • [Sep 22] Modi’s Digital India comes crashing down in Kashmir’s longest ever internet gag, The Wire report; The Hindu report.
  • [Sep 23] No clampdown in Kashmir, only communication line of terrorists stopped: Army Chief Bipin Rawat, India Today report.

Internal Security: NRC

  • [Sep 16] Those declared foreigners cannot file NRC appeal, says Assam govt, Hindustan Times report.
  • [Sep 18] NRC in Haryana, The Tribune report.
  • [Sep 18] NRC is an internal exercise, sovereign right of a country: EAM Jaishankar, Outlook report.
  • [Sep 18] Government will implement NRC across the country: Amit Shah, The Economic Times report; Times of India report.
  • [Sep 21] NRC Officials issue public advisory against collection of identification documents, Guwahati Plus report.
  • [Sep 22] NRC-excluded Gurkhas not to approach Foreigners’ Tribunals, seek empowered panel, The Hindu report; Times of India report.
  • [Sep 14] Final Assam NRC list, with 1.9 million exclusions, published online, Hindustan Times report.

National Security Law

  • [Sep 17] Pulwama to Aug 5: Delhi HC indicted govt for PSA arrests in 80 pc cases, Financial Express report.
  • [Sep 16] What is the Public Safety Act under which Farooq Abdullah has been detained? News Nation report.
  • [Sep 16] 52 years on, still no sign of national defence university, The Times of India report.
  • [Sep 16] NSA Doval gets national security, foreign policy as PMO defines roles of top officials, The Asian Age report.

Big Tech

  • [Sep 15] Facebook VP Nick Clegg says India’s policies will decide the fate of the internet, Financial Express report.
  • [Sep 17] Facebook Establishes Structure and Governance for an Independent Oversight Board, Facebook Newsroom announcement; Medianama report.
  • [Sep 19] Facebook expands definition of terrorist organization to limit extremism, The New York Times report.
  • [Sep 22] Facebook is experimenting with AI that lets you digitally get dressed, The Source report.
  • [Sep 23] Google braces for landmark global privacy ruling, Bloomberg report.

Telecom/5G

  • [Sep 16] 5G spectrum auction this year or in early 2020: Telecom Minister RS Prasad, Medianama report.
  • [Sep 20] TRAI opens consultation process for mergers and transfers in telecom sector, Medianama report.
  • [Sep 23] Indian masses have to wait 5-6 years to get true 5G experience, ET Telecom report.

More on Huawei

  • [Sep 17] Facing US ban, Huawei emerging as stronger tech competitor, The Hindu Business Line report; The Diplomat report.
  • [Sep 18] Huawei’s big test will be trying to sell a device with no Google apps outside China, Quartz report.
  • [Sep 18] Huawei users at risk as US blacklist cuts access to shared data on new cyber threats, Forbes report.
  • [Sep 20] Huawei makes sizeable 5G progress, bags 60 contracts: Ken Hu, The Economic Times report.
  • [Sep 21] Huawei unveils 5G training center in UK, ET Telecom report.

AI and Emerging Tech

  • [Sep 14] Artificial intelligence only goes so far in today’s economy, says MIT study, Forbes report.
  • [Sep 16] The US Govt will spend $1 bn on AI next year – not counting the Pentagon, Defense One report.
  • [Sep 18] Facial recognition systems to debut at Pune airport by 2020: report, Medianama report.
  • [Sep 18] AI stats news: AI is actively watching you in 75 countries, Forbes report.
  • [Sep 18] The Intel community wants to identify people from hundreds of yards away, Defense One report.
  • [Sep 19] Google setting up AI lab ‘Google Research India’ in Bengaluru, Entrackr report.
  • [Sep 20] India is planning a huge China-style facial recognition program, The Economic Times report.

Opinions and Analyses

  • [Sep 15] Nitin Pai, Livemint, The geopolitical profile of India tracks the economy’s trajectory.
  • [Sep 16] Paul Ravindranath, Tech Circle, Inclusion in technology is a compelling economic and business case.
  • [Sep 16] Markandey Katju, The Hindu, The litmus test for free speech.
  • [Sep 16] Vishal Chawla, Analytics India Magazine, What India can take away from Google’s settlement on employees’ freedom of expression.
  • [Sep 16] Editorial, Times of India, All talk: Fate of national defence university shows apathy towards defence modernisation.
  • [Sep 16] Jeff Hussey, Forbes, The gap between strong cybersecurity and demands for connectivity is getting massive.
  • [Sep 16] Kai Sedgwick, Bitcoin.com, How crypto became a gambler’s paradise.
  • [Sep 17] Ajai Shukla, Business Standard, In picking strategic partners, the defence ministry isn’t spoilt for choice.
  • [Sep 17] Anthony Pfaff, Defense One, The Saudi oil attacks aren’t game changing. They show how the game has changed.
  • [Sep 17] Kayla Matthews, Security Boulevard, Who’s financially responsible for cybersecurity breaches?
  • [Sep 17] Anirudh Gotety, ET Markets, Check crypto trade, ban won’t help.
  • [Sep 17] PS Ahluwalia, Livemint, Rafale will add heft to IAF’s deterrence capabilities.
  • [Sep 17] Lorand Laksai, Privacy International, How China is supplying surveillance technology and training around the world.
  • [Sep 18] Tabish Khair, The Hindu, In Kashmir, shaking the apple tree.
  • [Sep 18] Catrin Nye, BBC News, Live facial recognition surveillance ‘must stop’.
  • [Sep 18] Privacy International, The EU funds surveillance around the world: here’s what must be done about it.
  • [Sep 18] Joshua P Meltzer and Cameron F. Kerry, Brookings Institution, Cybersecurity and digital trade: Getting it right.
  • [Sep 19] Lt Gen HS Panag, The Print, Amit Shah’s political aim to recover PoK is not backed by India’s military capacity.
  • [Sep 20] Rifat Fareed, Al Jazeera, Farooq Abdullah’s arrest leaves India with few allies in Kashmir.
  • [Sep 22] Air Marshal (retd) M Matheswaran, Deccan Herald, Time for structural reforms, modernisation.