Introduction to AI Bias

By Nidhi Singh, CCG

Note: This article is adapted from an op-ed published in the Hindu Business Line, which can be accessed here.

A recent Nasscom report estimates that the integrated adoption of artificial intelligence (AI) and a data utilisation strategy could add USD 500 billion to the Indian economy. In June 2022, MeitY published the Draft National Data Governance Framework Policy, which aims to enhance the access, quality, and use of non-personal data in ‘line with the current emerging technology needs of the decade.’ This is another step in the worldwide push by governments to bring machine learning and AI models, which are trained on individuals’ data, into the sphere of governance.

While India is still deliberating the legislative and regulatory safeguards that must accompany the use of such data in AI systems, many countries have already begun deploying these systems, sometimes with serious consequences. For example, in January 2021, the Dutch government resigned en masse over a child welfare fraud scandal involving the alleged misuse of benefit schemes.

The Dutch tax authorities used a ‘self-learning’ algorithm to assess benefit claims and classify them according to their potential risk for fraud. The algorithm flagged certain applications as being at a higher risk for fraud, and these applications were then forwarded to an official for manual scrutiny. Officials received the flagged applications with an assessment that they were likely to contain false claims, but they were not told why the system had classified them as high-risk.

Following the adoption of an overly strict interpretation of the government policy on identifying fraudulent claims, the AI system used by the tax authorities began to flag every data inconsistency, including actions like failure to sign a page of the form, as an act of fraud. Additionally, the Dutch government’s zero-tolerance policy on tax fraud meant that erroneously flagged families had to return benefits not only from the period in which the fraud was alleged to have been committed but from up to five years before that as well. Finally, the algorithm also learnt to systematically flag claims filed by parents with dual citizenship as high-risk, and these were subsequently marked as potentially fraudulent. As a result, a disproportionately high number of the people labelled as fraudsters by the algorithm had an immigrant background.

What makes the situation more complicated is that it is difficult to narrow down a single factor that caused the ‘self-learning’ algorithm to arrive at the biased output, owing to the ‘black box effect’ and the lack of transparency about how an AI system makes its decisions. This biased output delivered by the AI system is an example of AI bias.

The Problems of AI Bias

AI bias is said to occur when there is an anomaly in the output produced by a machine learning algorithm. This may be caused by prejudiced assumptions made during the algorithm’s development process or by prejudices in the training data. The concerns surrounding potential AI bias in the deployment of algorithms are not new. For almost a decade, researchers, journalists, activists, and even tech workers have repeatedly warned about the consequences of bias in AI. The process of creating a machine learning algorithm is based on the concept of ‘training’. In a machine learning process, the computer is exposed to vast amounts of data, from which it learns how to make judgements or predictions. For example, an algorithm designed to judge a beauty contest would be trained on pictures and data from past beauty pageants. AI systems use algorithms made by human researchers, and if they are trained on flawed datasets, they may end up hardcoding bias into the system. In the beauty contest example, the algorithm failed its desired objective: it eventually chose winners based solely on skin colour, excluding contestants who were not light-skinned.
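To make the training process concrete, here is a minimal sketch of how a model trained on prejudiced historical labels reproduces that prejudice. It is our illustration, not from the original op-ed: the ‘skill’ and ‘group’ variables and all data are invented, and scikit-learn is assumed to be available.

```python
# Minimal, hypothetical sketch: a classifier trained on prejudiced labels
# simply reproduces the prejudice. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)            # legitimate merit signal
group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)

# Flawed historical labels: past "winners" were chosen partly on group
# membership rather than on merit alone.
winner = ((skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), winner)

# At identical skill, the model now scores group 1 higher: the bias in
# the training data has been hardcoded into the system.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```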

This brings us to one of the most fundamental problems in AI systems: ‘garbage in, garbage out’. AI systems depend heavily on accurate, clean, and well-labelled training data to learn from, which in turn produces accurate and functional results. A large share of the time in deploying AI systems is spent preparing the data, through collection, cleaning, and labelling, much of which is very human-intensive. Additionally, AI systems are usually designed and operationalised by teams that tend to be homogeneous in their composition, that is to say, they are generally composed of white men.

There are several factors that make AI bias hard to counter. One of the main problems is that the very foundations of these systems are often flawed. Recent research has shown that ten key datasets often used for machine learning and data science, including ImageNet (a large dataset of annotated photographs intended for use as training data), are in fact riddled with errors. These errors can be traced to the quality of the data the system was trained on or, for instance, to biases introduced by the labellers themselves, such as labelling more men as doctors and more women as nurses in pictures.
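One modest safeguard is to audit annotations before training. The sketch below is illustrative only; the records and column names are invented, and pandas is assumed. It shows how a simple cross-tabulation can surface the kind of labeller skew described above:

```python
# Illustrative sketch: auditing a labelled dataset for labeller bias.
# The records below are invented; in practice they would come from the
# dataset's annotation files.
import pandas as pd

annotations = pd.DataFrame({
    "perceived_gender": ["man", "man", "woman", "woman", "man", "woman"],
    "label":            ["doctor", "doctor", "nurse", "nurse", "doctor", "doctor"],
})

# A cross-tabulation surfaces skews such as "doctor" being applied
# overwhelmingly to men -- a red flag worth investigating before training.
print(pd.crosstab(annotations["perceived_gender"], annotations["label"],
                  normalize="index"))
```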

How do we fix bias in AI systems?

This is a question that many researchers, technologists, and activists are trying to answer. Some of the more common approaches include inclusivity, both in data collection and in the design of the system. There have also been calls for increased transparency and explainability, which would allow people to understand how AI systems make their decisions. For example, in the case of the Dutch algorithm, while the officials received an assessment stating that an application was likely to be fraudulent, it provided no reasons as to why the algorithm had detected fraud. If the officials in charge of the second round of review had known what the system would flag as an error, including missed signatures or dual citizenship, they might have been able to mitigate the damage.

One possible mechanism to address the problem of bias is the ‘blind taste test’. This mechanism checks whether the results produced by an AI system depend on a specific variable such as sex, race, economic status, or sexual orientation. Simply put, it tries to ensure that protected characteristics like gender, skin colour, or race play no role in decision-making processes.

The mechanism involves testing the algorithm twice: the first time with the variable, such as race, and the second time without it. In the first run, the model is trained on all the variables including race; in the second, it is trained on all variables excluding race. If the model returns the same results, the AI system can be said to make predictions that are blind to that factor. But if the predictions change with the inclusion of a variable, such as dual citizenship status in the case of the Dutch algorithm or skin colour in the beauty contest, the AI system would have to be investigated for bias. This is just one potential mitigation test; a minimal sketch of it appears below. States are also experimenting with other technical interventions, such as the use of synthetic data to create less biased datasets.
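Here is a minimal sketch of the blind taste test, with hypothetical variables loosely modelled on the Dutch example. All data is synthetic and scikit-learn is assumed; a real audit would use the system’s actual training data and features.

```python
# Illustrative sketch of the 'blind taste test': train once with the
# protected variable, once without, and compare the model's scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
income = rng.normal(size=n)
errors_on_form = rng.integers(0, 3, size=n).astype(float)
dual_citizenship = rng.integers(0, 2, size=n).astype(float)  # protected variable

# Synthetic labels in which the protected variable (unfairly) matters.
fraud = ((0.8 * dual_citizenship + rng.normal(size=n)) > 1.0).astype(int)

X_with = np.column_stack([income, errors_on_form, dual_citizenship])
X_without = X_with[:, :2]  # same data, protected column dropped

p_with = LogisticRegression().fit(X_with, fraud).predict_proba(X_with)[:, 1]
p_without = LogisticRegression().fit(X_without, fraud).predict_proba(X_without)[:, 1]

# If scores shift materially once the protected variable is included,
# the system warrants a bias investigation.
print("max score shift:", np.abs(p_with - p_without).max())
```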

Where do we go from here?

The Dutch case is merely one of the examples in a long line of instances that warrant higher transparency and accountability requirements for the deployment of AI systems. There are many approaches that have been, and are still being developed and considered to counter bias in AI systems. However, the crux remains that it may be impossible to fully eradicate bias from AI systems due to the biased nature of human developers and engineers, which is bound to be reflected within technological systems. The effects of these biases can be devastating depending upon the context and the scale at which they are implemented. 

While new and emerging technical measures can be used as stopgaps, comprehensively dealing with bias in AI systems requires addressing bias in those who design and operationalise them. In the interim, regulators and states must step up to carefully scrutinise, regulate, or in some cases halt the use of AI systems that provide essential services to people. One example of such regulation would be the framing and adoption of risk-based assessment frameworks for AI systems, wherein the regulatory requirements depend upon the level of risk the systems pose to individuals. This could include permanently banning the deployment of AI systems in areas where they may pose a threat to people’s safety, livelihood, or rights, such as credit scoring systems or other systems that could manipulate human behaviour. For AI systems scored as lower risk, such as AI chatbots used for customer service, the threshold for prescribed safeguards may be lower. The question of whether AI systems can ever truly be free from bias may never be fully answered; however, the harms these biases cause can be mitigated with proper regulatory and technical measures.

Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part II

Post authored by Prateek Sibal

The previous post on “Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part I” discussed recent advancements in AI technologies that have led to new commercial applications with potentially adverse social implications. We also considered the challenges of AI governance and discussed the role of technical benchmarks for evaluating AI systems.

In this post, we explore the different AI risk assessment approaches that can underpin AI regulation, and conclude with a discussion on the next steps for national AI governance initiatives.

Artificial Intelligence Risk Assessment Frameworks

Risk assessments can help identify the AI systems that need to be regulated. Risk is determined by the severity of the impact of a problem and the probability of its occurrence. For example, the risk profile of a facial recognition system used to unlock a mobile phone would differ from that of a facial recognition system used by law enforcement in the public arena. The former may be beneficial, as it adds a privacy-protecting security feature to the mobile phone. In contrast, the latter will have a chilling effect on free expression and privacy due to its mass surveillance capability. Therefore, the risk score for facial recognition systems will depend on their use and deployment context. This section will discuss some of the approaches followed by various bodies in developing risk assessment frameworks for AI systems.
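As a rough illustration of this definition, risk can be modelled as a function of severity and probability, with the same technology scoring differently across deployment contexts. The sketch below is a toy model; the numeric scales and values are invented for illustration, not drawn from any framework discussed here.

```python
# Toy sketch of the risk formula described above: risk as a function of
# impact severity and probability of occurrence. Scales are invented.
def risk_score(severity: float, probability: float) -> float:
    """Both inputs on a 0-1 scale; returns a 0-1 risk score."""
    return severity * probability

# Facial recognition used to unlock a phone: failures have limited impact.
phone_unlock = risk_score(severity=0.2, probability=0.3)

# The same technology used for mass surveillance in public spaces:
# severe impact on privacy and free expression, and high exposure.
public_surveillance = risk_score(severity=0.9, probability=0.8)

print(phone_unlock, public_surveillance)  # 0.06 vs 0.72
```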

European Commission’s approach

The European Commission’s legislative proposal on Artificial Intelligence classifies AI systems by four levels of risk and outlines risk-proportionate regulatory requirements. The categories proposed by the EU, illustrated schematically after the list below, include:

  1. Unacceptable Risk: AI systems that pose a clear threat to people’s safety, livelihood, and rights fall under the category of unacceptable risk. The EU Commission has stated that applications that include social credit scoring systems and AI systems that can manipulate human behaviour will be banned.
  2. High Risk: AI systems that endanger the safety or fundamental rights of people are categorised as high-risk. There are mandatory requirements for such systems, including the “quality of data sets used; technical documentation and record-keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity”. The EU will maintain an updated list of high-risk AI systems to respond to emerging challenges. At present, high-risk AI systems include AI algorithms used in transport systems, job hiring processes, border control and management, law enforcement, education systems, and democratic processes.
  3. Limited Risk: When the risks associated with the AI systems are limited, only transparency requirements are prescribed. For example, in the case of a customer engaging with an AI-based chatbot, the customer should be informed that they are interacting with an AI system.
  4. Minimal Risk: When the risk level is identified as minimal, there are no mandatory requirements, but the developers of such AI systems may voluntarily choose to follow industry standards. Examples of such applications include AI-enabled video games or spam filters.
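The tiers can be read as a simple mapping from category to regulatory consequence. The sketch below is a schematic restatement of the list above, not the EU’s actual legal test; the example systems are drawn from the Commission’s own illustrations.

```python
# Schematic sketch of the four proposed EU tiers as a mapping from
# category to regulatory consequence. Illustrative only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "mandatory requirements: data quality, documentation, oversight"
    LIMITED = "transparency obligations only"
    MINIMAL = "no mandatory requirements; voluntary codes"

examples = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```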

The EU proposal bans real-time remote biometric identification like facial recognition systems installed in public spaces due to their adverse impact on fundamental rights like freedom of expression and privacy.

German approach

In Germany, the Data Ethics Commission has proposed a five-layer criticality pyramid, ranging from no regulation at the lowest risk level to a complete ban at the highest. Figure 2 presents the criticality pyramid and risk-adapted regulation framework for AI systems. The EU approach is similar to the German approach but differs in the number of levels.

Figure 2: Criticality pyramid and risk-adapted regulatory system for the use of algorithmic systems (Source: Opinion of the Data Ethics Commission)

UK approach

The AI Barometer Report of the Centre for Data Ethics and Innovation, which the UK government tasked with facilitating multistakeholder cooperation to develop the governance regime for data-driven technologies, identifies some common risks associated with AI systems, as well as some sector-specific ones. The common risks include:

  1. Bias: Algorithmic bias and discrimination
  2. Explainability: Lack of explainability of AI systems
  3. Regulatory capacity: Regulatory capacity of the state, i.e. its capacity to develop and enforce regulation
  4. Data privacy: Breaches of data privacy due to failures of user consent
  5. Public trust: Loss of public trust in institutions due to problematic AI and data use

The researchers identified that the severity of common risks varies across different sectors like criminal justice, financial services, health & social care, digital & social media, and energy and utilities. For example, algorithmic bias leading to discrimination is considered high-risk in criminal justice, financial services, health & social care, and digital & social media, but medium-risk in energy and utilities. The risk assignment, in this case, was done through expert discussions.

Organisation of Economic Cooperation and Development (OECD) approach

The OECD’s work on AI classification presents a model for classifying an AI system that can inform risk assessment under each class. The preliminary classification of AI systems developed by the OECD Network of Experts’ working group on AI classification has four dimensions:

  1. Context: The context in which an AI system is developed and deployed. Context includes stakeholders that deploy an AI system, the stakeholders impacted by its use and the sector in which an AI system is deployed.
  2. Data: Data and inputs to an AI system play a vital role in determining the system’s outputs based on the data classifiers used, the source of the data, its structure, scale, and how it was collected.
  3. Type of algorithm: The type of algorithms used in AI systems has implications for transparency, explainability, autonomy and privacy, among other principles. For example, an AI system can use a rules-based algorithm, which executes a series of pre-defined steps. Manufacturing robots used in assembly lines are an example of such a rules-based AI. In contrast, AI systems based on artificial neural networks (ANN) are inspired by the human brain’s structure and functions. These neural networks learn to solve problems by performing many iterations until they get the correct outcomes. In ANNs, the rules to reach a decision are developed by the AI model, and the decision-making process is opaque to humans.
  4. Task: The kind of task to be performed and the type of output expected vary across AI systems. AI systems can perform various tasks from forecasting, content personalisation to detection and recognition of voice or images.

Applying this classification framework to different cases, from facial recognition systems and medical devices to autonomous vehicles, allows us to understand the risks under each dimension and design appropriate regulation. In autonomous vehicles, the context of transportation and its significant risk of accidents increase the risk associated with AI systems. Such vehicles dynamically collect data and other inputs through sensors. They can suffer from security risks due to adversarial attacks, where input data fed to the AI models can be tampered with, leading to accidents. The AI algorithms used in autonomous vehicles perform tasks like detecting road signs, deciding vehicle parameters like speed and direction, and responding to road conditions. If such decision-making happens without human control or oversight, it can pose significant risks to the lives of passengers and pedestrians. This example illustrates that autonomous vehicles can be considered a high-risk category requiring robust regulatory oversight to ensure public safety.
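One way to picture the framework is as a structured record with one field per dimension. The sketch below is our illustration, applied to the autonomous-vehicle example above; the field values are descriptive strings chosen for illustration, not OECD terminology.

```python
# Illustrative sketch of the OECD's four classification dimensions as a
# record, instantiated for the autonomous-vehicle example.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    context: str    # who deploys it, who is affected, which sector
    data: str       # sources, structure, scale, collection method
    algorithm: str  # e.g. rules-based vs. learned neural network
    task: str       # forecasting, detection, personalisation, ...

autonomous_vehicle = AISystemProfile(
    context="public roads; passengers and pedestrians affected; transport sector",
    data="dynamic sensor streams (cameras, lidar); vulnerable to adversarial inputs",
    algorithm="learned neural networks; decision process opaque to humans",
    task="detect road signs, set speed and direction, respond to road conditions",
)
print(autonomous_vehicle)
```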

The four approaches to risk assessment discussed above are systematic attempts to understand AI-related risks and develop a foundation for downstream regulation that could address risks without being overly prescriptive.

Next Steps in Strengthening Risk-Adaptive Regulation for AI

This two-part blog series has framed the challenges of AI governance in terms of the Collingridge Dilemma concerning the social control of technology. It then discussed the role of technical benchmarks in assessing the performance of AI systems vis-à-vis AI ethics principles. The section on AI risk assessment presented different approaches to identifying AI applications and contexts that require regulation.

As the next step, national-level AI governance initiatives could work towards strengthening AI governance through:

  1. AI Benchmarking: Continuous development and updating of technical benchmarks for AI systems to assess their performance under different contexts with respect to AI ethics principles.
  2. Risk Assessments at the level of individual AI applications: Development of use cases and risk-assessment of different AI applications under different combinations of contexts, data and inputs, AI models and outputs.
  3. Systemic Risk Assessments: Analysis of risks at a systemic level, primarily when different AI systems interact. For example, in financial markets, AI algorithms interact with each other, and in certain situations, their interactions can cascade into a market crash.

Once AI risks are better understood, proportional regulatory approaches should be developed and subjected to Regulatory Impact Analysis (RIA). The OECD defines Regulatory Impact Analysis as a “systemic approach to critically assessing the positive and negative effects of proposed and existing regulations and non-regulatory alternatives”. RIAs can guide governments in understanding if the proposed regulations are effective and efficient in achieving the desired objective. As a complement to its legislative proposal for AI, the European Commission conducted an impact assessment of the proposed legislation and reported an aggregate compliance cost of between 100 and 500 million euros by 2025, mainly for high-risk AI applications that account for 5-15 per cent of all AI applications. The assessment analyses other factors like the impact of the legislation on the competitiveness of Small and Medium Enterprises (SMEs), additional budgetary responsibility on national governments and whether the measures proposed are proportionate to the objectives of the legislation. Such impact assessments are good regulatory practice and will be important as more countries work towards national AI legislation.

Finally, given the globalised nature of different AI services and products, countries should develop national-level regulatory approaches to AI in conversation with each other. Importantly, these dialogues at the global and national level should be multistakeholder driven to ensure that different perspectives inform any ensuing regulation. The pooling of knowledge and coordination on governing AI risks will lead to overall benefits by ensuring AI development in a manner that is ethically aligned while providing a stable environment for innovation and interoperability due to policy coherence.

The author would like to thank Jhalak Kakkar and Nidhi Singh for their helpful feedback.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.

Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part 1

Post authored by Prateek Sibal

In the five years between 2015 and 2020, 117 initiatives published AI ethics principles worldwide. Despite a skewed geographical scope, with 91 of these initiatives emerging in Europe and North America, the proliferation of such initiatives paves the way for building global consensus on AI governance. Notably, the 37 OECD Member States have adopted the OECD AI Recommendation, the G20 has endorsed these principles, and the Global Partnership on AI is operationalising them. In the UN system, the United Nations Educational, Scientific and Cultural Organization (UNESCO) is developing a Recommendation on the Ethics of AI that 193 countries may adopt in 2021.

An analysis of different principles reveals a high-level consensus around eight themes: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. At the same time, ethical principles are criticised for lacking enforcement mechanisms. Companies often commit to AI ethics principles to improve their public image with little follow-up on implementation, an exercise termed ‘ethics washing’. Evidence also suggests that knowledge of ethical tenets has little or no effect on whether software engineers factor ethical principles into developing products or services.

Defining principles is essential, but it is only the first step for ethical AI governance. There is a need for mid-level norms, standards and guidelines at the international level that may inform regional or national regulation to translate principles into practice. This two-part blog will discuss the need for AI governance to evolve past the ‘ethics formation stage’ into concrete and tangible steps such as developing technical benchmarks and adopting risk-based regulation for AI systems.

Part one of the blog has three sections. The first section discusses some of the technical advances in AI technologies in recent years. These advances have led to new commercial applications with some potentially adverse social implications. Section two discusses the challenges of AI governance and presents a framework for mitigating the adverse implications of technology on society. Finally, section three discusses the role of technical benchmarks for evaluating AI systems. Part two of the blog will contain further discussion on risk assessment approaches to help identify the AI applications and contexts that need to be regulated.  It will also discuss the next steps for national initiatives for AI governance.

The blog follows the definition of an AI system proposed by the OECD’s AI Experts Group. They describe an AI system as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. It uses machine or human-based inputs to perceive real or virtual environments, abstract such perceptions into models (in an automated manner, e.g. with ML or manually), and use model inference to formulate options for information or action. AI systems are designed to operate with varying levels of autonomy.”

Recent Advances in AI Technologies

Artificial Intelligence is developing rapidly, and it is important to lay out a broad overview of AI developments that may have profound and potentially adverse impacts on individuals and society. The 2021 AI Index report notes four crucial technical advances that hastened the commercialisation of AI technologies:

  • AI-Generated Content: AI systems can generate high-quality text, audio and visual content to a level that it is difficult for humans to distinguish between synthetic and non-synthetic content.
  • Image Processing: Computer vision, a branch of computer science that “works on enabling computers to see, identify and process images in the same way that human vision does, and then provide appropriate output”, has seen immense progress in the past decade and is fast industrialising in applications that include autonomous vehicles.
  • Language Processing: Natural Language Processing (NLP) is a branch of computer science “concerned with giving computers the ability to understand the text and spoken words in much the same way human beings can”. NLP has advanced such that AI systems with language capabilities now have meaningful economic impact through live translations, captioning, and virtual voice assistants.
  • Healthcare and biology: DeepMind’s AlphaFold solved the decades-old protein folding problem using machine learning techniques. This breakthrough will allow the study of protein structure and will contribute to drug discovery.

These technological advances have social implications. For instance, the technology for generating synthetic faces has rapidly improved. As shown in Figure 1, in 2014 AI systems produced grainy faces, but by 2017 they were generating realistic synthetic faces. Such AI systems have led to the proliferation of ‘deepfake’ pornography that overwhelmingly targets women and has the potential to erode people’s trust in the information and videos they encounter online. Some actors misuse deepfake technology to spread online disinformation, with adverse implications for democracy and political stability. Such developments have made AI governance a pressing matter.


Figure 1: Improvement in AI-generated images. Source: https://arxiv.org/pdf/1802.07228.pdf

Challenges of AI Governance

In this blog, AI governance is understood as the development and application by governments, the private sector, and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape AI’s evolution and use. As highlighted in the previous section, the rapid advancements in the field of AI technologies have brought the need for better AI governance to the forefront.

In thinking about AI governance, a conundrum that preoccupies many governments worldwide concerns the enactment of regulation that does not stifle innovation while still providing adequate safeguards to protect human rights and fundamental freedoms.

Technology regulation is complicated because until a technology has been extensively developed and widely used, its impact on society is difficult to predict. However, once it is deeply entrenched and its effect on society is understood better, it becomes more challenging to regulate the technology. This tension between free and unimpeded technology development and regulating adverse implications is termed the Collingridge dilemma.

David Collingridge, the author of The Social Control of Technology, noted that when regulatory decisions have to be made in ignorance of a technology’s social impact, continuous monitoring of its impact on society can help correct unexpected consequences early. Collingridge’s guidelines for decision-making under ignorance can inform AI governance as well. These include choosing technology options with:

  • Low failure costs: Selecting options with low error costs, i.e. if a policy or regulation fails to achieve its intended objective, the costs associated with failure are limited.
  • Quicker to correct: Selecting technologies with low response time for correction after the discovery of unanticipated problems.
  • Low cost of applying remedy: Selecting solutions with low cost of applying the remedy, i.e. options with a low fixed cost and a higher variable cost, should be given preference over the ones with a higher fixed cost, and
  • Continuous monitoring: Cost-effective and efficient monitoring can ensure the discovery of unpredicted consequences quickly.

For instance, the requirements around transparency in AI systems provide information for monitoring the impact of AI systems on society. Similarly, risk assessments of AI systems offer a pre-emptive form of oversight over technology development and use, which can help minimise potential social harms.  

Technical benchmarks for evaluating AI systems

To address ethical problems related to bias, discrimination, and the lack of transparency and accountability in algorithmic decision-making, quantitative benchmarks to assess AI systems’ performance against these ethical principles are needed.

The Institute of Electrical and Electronics Engineers (IEEE), through its Global Initiative on Ethics of Autonomous and Intelligent Systems, is developing technical standards, including on bias in AI systems. They describe “specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms”. Similarly, in the United States, the National Institute of Standards and Technology (NIST) is developing standards for explainable AI based on principles that call for AI systems to provide reasons for their outputs in a manner that is understandable to individual users, explain the process used for generating the output, and deliver their decision only when the AI system is fully confident.

For example, there is significant progress in introducing benchmarks for the regulation of facial recognition technology. Facial recognition systems have a large commercial market and are used for various tasks, including in law enforcement and border control; these tasks involve verifying visa photos, matching photos against criminal databases, and detecting child abuse images. Such facial recognition systems have been the cause of significant concern due to high error rates in detecting faces and their potential to impinge on human rights. Biases in such systems have adverse consequences for individuals denied entry at borders or wrongfully incarcerated. In the United States, the National Institute of Standards and Technology’s Face Recognition Vendor Test provides a benchmark to compare the performance of different commercially available facial recognition systems by operating their algorithms on different image datasets.
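To illustrate the kind of quantity such a benchmark reports, the sketch below computes a false match rate (FMR) and a false non-match rate (FNMR) from synthetic comparison scores. It is a minimal stand-in: real vendor tests run algorithms over large curated image datasets and report these rates per demographic group.

```python
# Illustrative sketch of benchmark metrics for face recognition:
# false match rate (FMR) and false non-match rate (FNMR) at a chosen
# decision threshold. The scores below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
genuine = rng.normal(0.8, 0.1, 10_000)   # same-person comparison scores
impostor = rng.normal(0.3, 0.1, 10_000)  # different-person comparison scores

threshold = 0.55
fnmr = (genuine < threshold).mean()   # genuine pairs wrongly rejected
fmr = (impostor >= threshold).mean()  # impostor pairs wrongly accepted
print(f"FNMR={fnmr:.4f}  FMR={fmr:.4f}")
# A thorough benchmark would report these rates per demographic group,
# since aggregate figures can mask much higher errors for some groups.
```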

The progress in defining benchmarks for ethical principles needs to be complemented by risk assessments of AI systems to pre-empt potentially adverse social impact in line with the Collingridge Dilemma discussed in the previous section. Risk assessments allow the categorisation of AI applications by their risk ratings. They can help develop risk-proportionate regulation for AI systems instead of blanket rules that may place an unnecessary compliance burden on technology development. The next blog in this two-part series will engage with potential risk-based approaches to AI regulation.

The author would like to thank Jhalak Kakkar and Nidhi Singh for their helpful feedback.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.

Building an AI Governance Framework for India, Part II

Embedding Principles of Safety, Equality and Non-Discrimination

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

In our previous post on building an AI governance framework for India, we discussed the legal and regulatory implications of the proposed Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its Constitutional framework and (2) based on clearly articulated overarching principles. While the NITI Working Document introduces certain principles, it does not go into any substantive details on what the adoption of these principles into India’s regulatory framework would entail.

We will now examine these ‘Principles for Responsible AI’, their constituent elements and avenues for incorporating them into the Indian regulatory framework. The NITI Working Document proposed the following seven ‘Principles for Responsible AI’ to guide India’s regulatory framework for AI systems: 

  1. Safety and reliability
  2. Equality
  3. Inclusivity and Non-Discrimination
  4. Privacy and Security 
  5. Transparency
  6. Accountability
  7. Protection and Reinforcement of Positive Human Values

This post explores the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. A subsequent post will discuss the principles of Privacy and Security, Transparency, Accountability and the Protection and Reinforcement of Positive Human Values.

Principle of Safety and Reliability

The Principle of Safety and Reliability aims to ensure that AI systems operate reliably in accordance with their intended purpose throughout their lifecycle, and that the security, safety and robustness of an AI system are maintained. It requires that AI systems should not pose unreasonable safety risks, should adopt safety measures proportionate to the potential risks, should be continuously monitored and tested to ensure compliance with their intended purpose, and should have a continuous risk management system to address any identified problems.

Here, it is important to note the distinction between safety and reliability. Reliability relates to the ability of an AI system to behave exactly as its designers intend and anticipate; a reliable system adheres to the specifications it was programmed to carry out. Reliability is, therefore, a measure of consistency and establishes confidence in the safety of a system. Safety, by contrast, refers to an AI system’s ability to do what it is supposed to do without harming users (human physical integrity), resources or the environment.

Human oversight: An important aspect of ensuring the safety and reliability of AI systems is the presence of human oversight over the system. Any regulatory framework that is developed in India to govern AI systems must incorporate norms that specify the circumstances and degree to which human oversight is required over various AI systems. 

The level of human oversight would depend upon the sensitivity of the function and the potential the AI system has for significant impact on an individual’s life. For example, AI systems deployed in the provision of government benefits should have a high level of human oversight, and decisions made by the AI system in this context should be reviewed by a human before being implemented. Other AI systems may be deployed in contexts that do not need constant human involvement; vending machines running simple algorithms are an example. Such systems should nonetheless have a mechanism in place for human review if a question is subsequently raised by, say, a user. Hence, the purpose for which the system is deployed and the impact it could have on individuals are relevant factors in determining whether ‘human in the loop’, ‘human on the loop’, or another oversight mechanism is appropriate.
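A minimal sketch of such a ‘human in the loop’ gate appears below. The context labels and the review queue are hypothetical; the point is only that decisions in sensitive contexts are held for human sign-off rather than applied automatically.

```python
# Minimal sketch of a 'human in the loop' gate: decisions in sensitive
# contexts are queued for human review instead of taking effect
# automatically. Context labels and the queue are hypothetical.
SENSITIVE_CONTEXTS = {"government_benefits", "law_enforcement", "sentencing"}

review_queue = []

def apply_decision(context: str, decision: str) -> str:
    if context in SENSITIVE_CONTEXTS:
        review_queue.append((context, decision))  # a human must sign off
        return "pending human review"
    return decision                               # low-impact: auto-apply

print(apply_decision("government_benefits", "deny claim"))  # pending human review
print(apply_decision("vending_machine", "dispense item"))   # dispense item
```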

Principle of Equality

The principle of equality holds that everyone, irrespective of their status in society, should get the same opportunities and protections from the development of AI systems.

Implementing equality in the context of AI systems essentially requires three components: 

(i) Protection of human rights: AI instruments developed across the globe have highlighted that the implementation of AI would pose risks to the right to equality, and countries would have to take steps to mitigate such risks proactively. 

(ii) Access to technology: AI systems should be designed to ensure widespread access to technology, so that people may derive benefits from AI.

(iii) Guarantees of equal opportunities through technology: The guarantee of equal opportunity relies upon the transformative power of AI systems to “help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge” and “produce social and economic benefits for all by reducing social inequalities and vulnerabilities.” AI systems will have to be designed and deployed such that they further the guarantees of equal opportunity and do not exacerbate and further entrench existing inequality.

The development, use and deployment of AI systems in society would pose the above-mentioned risks to the right to equality, and India’s regulatory framework for AI must take steps to mitigate such risks proactively.

Principle of Inclusivity and Non-Discrimination

The idea of non-discrimination mostly arises out of technical considerations in the context of AI. It holds that bias should be mitigated in the training data, technical design choices, and the technology’s deployment to prevent discriminatory impacts.

An example can be seen in data collection for policing, where disproportionate attention paid to neighbourhoods with minorities records higher incidences of crime in those neighbourhoods, thereby skewing AI results. The use of AI systems becomes safer when they are trained on datasets that are sufficiently broad and encompass the various scenarios in which the system is envisaged to be deployed. Additionally, datasets should be developed to be representative, so as to avoid discriminatory outcomes from the use of the AI system.

Another example is semi-autonomous vehicles that experience higher accident rates among dark-skinned pedestrians due to the software’s poorer performance in recognising darker-skinned individuals. This can be traced back to training datasets that contained mostly light-skinned people. The lack of diversity in a dataset can lead to discrimination against specific groups in society. To ensure effective non-discrimination, training data must be truly representative of society, with no section of the populace either over-represented or under-represented in a way that skews the dataset. While designing AI systems for deployment in India, the constitutional rights of individuals should be used as central values around which the AI systems are designed.
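As an illustration, a representativeness check can compare group shares in a training set against a reference distribution and flag large gaps. The sketch below is illustrative only; the group labels, counts, population shares and tolerance are all invented.

```python
# Illustrative sketch: compare group shares in a training set against a
# reference population and flag over- or under-representation.
from collections import Counter

training_groups = ["light"] * 850 + ["dark"] * 150  # skewed training set
population_share = {"light": 0.6, "dark": 0.4}      # reference distribution

counts = Counter(training_groups)
total = sum(counts.values())
for group, expected in population_share.items():
    observed = counts[group] / total
    if abs(observed - expected) > 0.1:  # tolerance chosen arbitrarily
        print(f"{group}: {observed:.0%} in data vs {expected:.0%} expected"
              " -- dataset may under- or over-represent this group")
```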

In order to implement inclusivity in AI, the diversity of the team involved in design as well as the diversity of the training data set would have to be assessed. This would involve the creation of guidelines under India’s regulatory framework for AI to help researchers and programmers in designing inclusive data sets, measuring product performance on the parameter of inclusivity, selecting features to avoid exclusion and testing new systems through the lens of inclusivity.

Checklist Model: To address the challenges of non-discrimination and inclusivity, a potential model that could be adopted in India’s regulatory framework for AI is the ‘checklist’. The European Network of Equality Bodies (EQUINET), in its recent report on ‘Meeting the new challenges to equality and non-discrimination from increased digitisation and the use of Artificial Intelligence’, provides a checklist to assess whether an AI system complies with the principles of equality and non-discrimination. The checklist consists of several broad categories, with a focus on the deployment of AI technology in Europe. These include heads such as direct discrimination, indirect discrimination, transparency, other types of equity claims, data protection, liability issues, and identification of the liable party.

The list contains a series of questions that judge whether an AI system meets standards of equality and identify any potential biases it may have. For example, the question “Does the artificial intelligence system treat people differently because of a protected characteristic?” covers both the direct use of protected data and the use of proxies. If the answer to the question is yes, the system would be identified as exhibiting bias, whether direct or indirect. A similar checklist system, contextualised for India, could be developed and employed in India’s regulatory framework for AI.
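A checklist of this kind lends itself to a very simple mechanical form. The sketch below is a simplified stand-in, not the actual EQUINET checklist; the questions, answers and scoring are invented for illustration.

```python
# Minimal sketch of a checklist-style assessment in the spirit of the
# EQUINET report. Each entry: (question, recorded answer, answer that
# indicates risk). Content simplified for illustration.
checklist = [
    ("Does the system treat people differently because of a protected "
     "characteristic, directly or via proxies?", "yes", "yes"),
    ("Can the operator explain how the system reaches its decisions?", "no", "no"),
    ("Has a party liable for harms caused by the system been identified?", "no", "no"),
]

failures = [q for q, answer, risky in checklist if answer == risky]
print(f"{len(failures)} of {len(checklist)} checks flag a concern:")
for q in failures:
    print(" -", q)
```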

Way forward

This post highlights some of the key aspects of the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. Integrating the principles identified in the NITI Working Document into India’s regulatory framework requires that we first clearly define their content, scope and ambit, so as to identify the right mechanisms to operationalise them. Given the absence in the NITI Working Document of any exploration of the content of these AI principles or the mechanism for their implementation in India, we have examined the relevant international literature surrounding the adoption of AI ethics and suggested mechanisms for their adoption. The NITI Working Document has spurred discussion around designing an effective regulatory framework for AI. However, these discussions are at a preliminary stage, and there is a need to develop a far more nuanced proposal for a regulatory framework for AI.

Over the last week, India hosted the Responsible AI for Social Empowerment (RAISE) Summit, which involved discussions around India’s vision and roadmap for social transformation, inclusion and empowerment through Responsible AI. As we discuss mechanisms for India to effectively harness the economic potential of AI, we also need to design an effective framework to address the massive regulatory challenges emerging from the deployment of AI, simultaneously and not as an afterthought post-deployment. While a few of the RAISE sessions engaged with certain aspects of regulating AI, there remains a need for extensive, continued public consultations with a cross-section of stakeholders to embed principles for Responsible AI in the design of an effective AI regulatory framework for India.

For a more detailed discussion on these principles and their integration into the Indian context, refer to our comments to the NITI Aayog here. 

Building an AI governance framework for India

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a “Working Document: Towards Responsible AI for All” (“NITI Working Document/Working Document”). The Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

The Working Document highlights the potential of Artificial Intelligence (“AI”) in the Indian context. It attempts to identify the challenges that will be faced in the adoption of AI and makes some recommendations on how to address these challenges. The Working Document emphasises the economic potential of the adoption of AI in boosting India’s annual growth rate, its potential for use in the social sector (‘AI for All’) and the potential for India to export relevant social sector products to other emerging economies (‘AI Garage’). 

However, this is not the first time that the NITI Aayog has discussed the large-scale adoption of AI in India. In 2018, the NITI Aayog released a discussion paper on the “National Strategy for Artificial Intelligence” (“National Strategy”). Building upon the National Strategy, the Working Document attempts to delineate ‘Principles for Responsible AI’ and identify relevant policy and governance recommendations. 

Any framework for the regulation of AI systems needs to be based on clear principles. The ‘Principles for Responsible AI’ identified by the Working Document include the principles of safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values. While the NITI Working Document introduces these principles, it does not go into any substantive details on the regulatory approach that India should adopt and what the adoption of these principles into India’s regulatory framework would entail. 

In a series of posts, we will discuss the legal and regulatory implications of the proposed Working Document and more broadly discuss the regulatory approach India should adopt to AI and the principles India should embed in it. In this first post, we map out key considerations that should be kept in mind in order to develop a comprehensive regulatory regime to govern the adoption and deployment of AI systems in India. Subsequent posts will discuss the various ‘Principles for Responsible AI’, their constituent elements and how we should think of incorporating them into the Indian regulatory framework.

Approach to building an AI regulatory framework 

While the adoption of AI has several benefits, there are several potential harms and unintended risks if the technology is not assessed adequately for its alignment with India’s constitutional principles and its impact on the safety of individuals. Depending upon the nature and scope of the deployment of an AI system, its potential risks can include the discriminatory impact on vulnerable and marginalised communities, and material harms such as the negative impact on the health and safety of individuals. In the case of deployments by the State, risks include violation of the fundamental rights to equality, privacy, freedom of assembly and association, and freedom of speech and expression. 

We highlight some of these regulatory considerations below:

Anchoring AI regulatory principles within the constitutional framework of India

The use of AI systems has raised concerns about their potential to violate multiple rights protected under the Indian Constitution such as the right against discrimination, the right to privacy, the right to freedom of speech and expression, the right to assemble peaceably and the right to freedom of association. Any regulatory framework put in place to govern the adoption and deployment of AI technology in India will have to be in consonance with its constitutional framework. While the NITI Working Document does refer to the idea of the prevailing morality of India and its relation to constitutional morality, it does not comprehensively address the idea of framing AI principles in compliance with India’s constitutional principles.

For instance, the government is seeking to acquire facial surveillance technology, and the National Strategy discusses the use of AI-powered surveillance applications by the government to predict crowd behaviour and for crowd management. The use of AI-powered surveillance systems such as these needs to be balanced against their impact on an individual’s right to freedom of speech and expression, privacy and equality. Operational challenges surrounding accuracy and fairness in these systems raise further concerns. Considering the risks posed to the privacy of individuals, the deployment of these systems by the government, if at all, should only be done in specific contexts for a particular purpose and in compliance with the principles laid down by the Supreme Court in the Puttaswamy case.

In the context of AI’s potential to exacerbate discrimination, it is relevant to consider the State’s use of AI systems for the sentencing of criminals and assessing recidivism. AI systems are trained on existing datasets, and these datasets tend to contain historically biased, unequal and discriminatory data. We have to be cognizant of the propensity for historical bias and discrimination to be imported into AI systems and their decision-making. This could further reinforce and exacerbate the existing discrimination in the criminal justice system towards marginalised and vulnerable communities, and result in a potential violation of their fundamental rights.

The National Strategy acknowledges the presence of such biases and proposes a technical approach to reduce bias. While such attempts are appreciable in their efforts to rectify the situation and yield fairer outcomes, such an approach disregards the fact that these datasets are biased because they arise from a biased, unequal and discriminatory world. As we seek to build effective regulation to govern the use and deployment of AI systems, we have to remember that these are socio-technical systems that reflect the world around us and embed the biases, inequality and discrimination inherent in the Indian society. We have to keep this broader Indian social context in mind as we design AI systems and create regulatory frameworks to govern their deployment. 

While the Working Document introduces principles for responsible AI such as equality, inclusivity and non-discrimination, and privacy and security, there needs to be substantive discussion around incorporating these principles into India’s regulatory framework in consonance with constitutionally guaranteed rights.

Regulatory Challenges in the adoption of AI in India

As India designs a regulatory framework to govern the adoption and deployment of AI systems, it is important that we keep the following in focus: 

  • Heightened threshold of responsibility for government or public sector deployment of AI systems

The EU is considering adopting a risk-based approach for the regulation of AI, with heavier regulation for high-risk AI systems. The extent of risk to factors such as safety, consumer rights and fundamental rights is assessed by looking at the sector of deployment and the intended use of the AI system. Similarly, India must consider the adoption of a higher regulatory threshold for the use of AI by at least government institutions, given their potential for impacting citizens’ rights. Government uses of AI systems that have the potential to severely impact citizens’ fundamental rights include the use of AI in the disbursal of government benefits, surveillance, law enforcement and judicial sentencing.

  • Need for overarching principles based AI regulatory framework

Different sectoral regulators are currently evolving regulations to address the specific challenges posed by AI in their sector. While it is vital to harness the domain expertise of a sectoral regulator and encourage the development of sector-specific AI regulations, such piecemeal development of AI principles can lead to fragmentation in the overall approach to regulating AI in India. Therefore, to ensure uniformity in the approach to regulating AI systems across sectors, it is crucial to put in place a horizontal overarching principles-based framework. 

  • Adaptation of sectoral regulation to effectively regulate AI

In addition to an overarching regulatory framework which forms the basis for the regulation of AI, it is equally important to envisage how this framework would work with horizontal or sector-specific laws, such as consumer protection law, and the applicability of product liability to various AI systems. Traditionally, consumer protection and product liability regulatory frameworks have been structured around fault-based claims. However, given the challenges concerning explainability and transparency of decision-making by AI systems, it may be difficult to establish the presence of defects in products and, for an individual who has suffered harm, to provide the necessary evidence in court. Hence, consumer protection laws may have to be adapted to stay relevant in the context of AI systems. Even sectoral legislation regulating the use of motor vehicles, such as the Motor Vehicles Act, 1988, would have to be modified to enable and regulate the use of autonomous vehicles and other AI transport systems.

  • Contextualising AI systems for both their safe development and use

To ensure the effective and safe use of AI systems, they have to be designed, adapted and trained on relevant datasets depending on the context in which they will be deployed. The Working Document envisages India being the AI Garage for 40% of the world – developing AI solutions in India which can then be deployed in other emerging economies. Additionally, India will likely import AI systems developed in countries such as the US, EU and China to be deployed within the Indian context. Both scenarios involve the use of AI systems in a context distinct from the one in which they have been developed. Without effectively contextualising socio-technical systems like AI systems to the environment they are to be deployed in, there are enhanced safety, accuracy and reliability concerns. Regulatory standards and processes need to be developed in India to ascertain the safe use and deployment of AI systems that have been developed in contexts that are distinct from the ones in which they will be deployed. 

The NITI Working Document is the first step towards an informed discussion on the adoption of a regulatory framework to govern AI technology in India. However, there is a great deal of work to be done. Any regulatory framework developed by India to govern AI must balance the benefits and risks of deploying AI, diminish the risk of any harm and have a consumer protection framework in place to adequately address any harm that may arise. Besides this, the regulatory framework must ensure that the deployment and use of AI systems are in consonance with India’s constitutional scheme.

[August 19-26] CCG’s Week in Review: Curated News in Information Law and Policy

The ECI sought a legal mandate to link Aadhaar with Voter IDs; Facebook approached the Supreme Court over PILs demanding Aadhaar linkage with social media accounts; MeitY invited ‘select stakeholders’ for private consultations over the data protection bill; and a new panel to review defence procurement practices in India was constituted by Defence Minister Rajnath Singh, who also hinted at dropping India’s no-first-use policy. Presenting this week’s most important developments in law and tech.

Aadhaar

  • [Aug 19] EC seeks statutory backing to collect voters’ Aadhaar numbers, The Times of India report.
  • [Aug 19] Facebook approaches SC over Aadhaar linkage pleas, The Deccan Herald report; Firstpost report.
  • [Aug 20] Aadhaar to ensure farmers, not middlemen, get benefits, The Economic Times report.
  • [Aug 21] SC cautions govt on linking Aadhaar with social media, ET Tech report.
  • [Aug 21] Election Commission writes to law ministry, seeks legal powers to collect Aadhaar numbers for cleaning up voters’ list, Firstpost report.
  • [Aug 22] Aadhaar may be used to verify SECC beneficiaries, The Economic Times report.
  • [Aug 23] Centre to put QR code on fishermen’s Aadhaar cards to secure sea route: Amit Shah, The Times of India report.
  • [Aug 24] Aadhaar-social media linking: 10 things to know about the ongoing issue, India Today report.
  • [Aug 24] Govt to allow Aadhaar-based KYC for domestic retail investors; amendments to PMLA to be issued, Firstpost report.
  • [Aug 25] Linking Aadhaar with electoral rolls will create Delhi, Mumbai Analyticas: Justice Srikrishna, The Week report.

Digital India

  • [Aug 19] Indian companies at a disadvantage in tenders, says Commerce ministry, Money Control report; The Times of India report.
  • [Aug 21] India’s IT Industry turns to flexi staffing to keep its bench from idling, ET Tech report.
  • [Aug 22] Indian IT Firms step up patent filings as they look to monetize their IP, ET Tech report.
  • [Aug 26] Time to revisit FTAs to fire up electronics: Ravi Shankar Prasad, ET Rise report.

E-Commerce

  • [Aug 21] Government hopes for an Ecommerce GeM, ET Tech report.
  • [Aug 23] Technology reforming India’s retail businesses, ET Tech report.

Digital Payments

  • [Aug 22] RBI to allow e-mandates on card payments from September 1, Medianama report.
  • [Aug 22] Digital payment execs met Finance Ministry officials to discuss demerits of removing MDR: report, Medianama report.

Cryptocurrencies

  • [Aug 18] US lawmakers to visit Switzerland to discuss Facebook’s Libra, Cointelegraph report.
  • [Aug 19] Israeli Bitcoiners petition banks to disclose crypto policies, Cointelegraph report.
  • [Aug 21] Authorities seize crypto mining equipment from nuclear power plant in Ukraine, Coin Desk report.
  • [Aug 23] $100K Crypto donation to Amazon rainforest charity blocked by BitPay, Coin Desk report.

Internet Governance

  • [Aug 18] Google, Facebook, WhatsApp to be made more accountable under new rules, Financial Express report.

Data Protection 

  • [Aug 19] Google cuts some Android phone data for wireless carriers amid privacy concerns, The Hindustan Times report.
  • [Aug 20] MEITY privately seeks responses to fresh questions on the data protection bill from select stakeholders, Medianama report; ET Tech report; Business Standard report; Inc42 report.
  • [Aug 21] Google, Intel and Microsoft form data protection consortium, Engadget report; The Economic Times report.
  • [Aug 22] Govt working towards tabling data protection bill in winter session, Livemint report; The Economic Times report.
  • [Aug 24] India needs to draw a distinction between personal and impersonal data: Ravi Shankar Prasad, Inc42 report.
  • [Aug 25] Data Protection Bill need of the hour, says Justice BN Srikrishna, Inc42 report.

Social Media

  • [Aug 19] Social media accounts need to be linked with Aadhaar to check fake news, SC told, Livemint report.
  • [Aug 20] Twitter and Facebook crack down on accounts linked to Chinese campaign against Hong Kong, The Guardian report; Defense One report.
  • [Aug 20] Facebook’s new tool lets you see which apps and websites tracked you, The New York Times report; ET Tech report.
  • [Aug 21] China cries foul over Facebook, Twitter block of fake accounts, ET Tech report.
  • [Aug 23] Facebook removes accounts linked to Myanmar military, Medianama report.

Freedom of Speech

  • [Aug 20] Islamic preacher Zakir Naik banned from giving public speeches in Malaysia, India Today report.
  • [Aug 20] Zakir Naik apologizes to Malaysians for racial remarks, India Today report.
  • [Aug 21] Shehla Rashid spreading fake news to incite violence in Jammu and Kashmir: Indian Army, DNA India report.
  • [Aug 24] IAS Officer Kannan Gopinathan resigns over ‘lack of freedom of expression’, The Hindu report; Scroll.in report.
  • [Aug 24] From colonial era to today’s India, a visual history of national security laws used to crush dissent, Scroll.in report.

Internal Security: Status of J&K

  • [Aug 19] Kashmir: now for the legal battle, India Today report.
  • [Aug 20] Amit Shah meets NSA, IB Chief on J&K, NDTV report.
  • [Aug 21] Armed forces to get human rights and vigilance cell after Rajnath Singh approves restructure, News 18 report.
  • [Aug 23] Opposition leaders demand release of Mehbooba Mufti, Omar Abdullah, The Economic Times report.
  • [Aug 23] Blackout is collective punishment against people of J&K: UN Human Rights experts call on India to end communications shutdown, Medianama report.
  • [Aug 25] Amid massive clampdown, uneasy calm in volatile south Kashmir, The Tribune report.

Tech and Law Enforcement

  • [Aug 20] Flaws in cellphone evidence prompt review of 10,000 verdicts in Denmark, The New York Times report.
  • [Aug 21] Supreme Court directs Madras HC not to pass final order in WhatsApp traceability case, Entrackr report.
  • [Aug 21] Facebook, WhatsApp and the encryption dilemma – What India can learn from the rest of the world, CNBC TV 18 report.
  • [Aug 21] WhatsApp’s response to Dr. Kamakoti’s recommendation for traceability in WhatsApp, Medianama report.
  • [Aug 25] Curbs on Aadhaar data use delayed murder probe: Cops, Deccan Herald report.
  • [Aug 26] End-to-end encryption not essential to WhatsApp as a platform: Tamil Nadu Advocate General, Medianama report.

Tech and National Security

  • [Aug 18] New Panel to review defence procurement procedure to strengthen ‘Make in India’, Bharat Shakti report; Jane’s 360 report.
  • [Aug 18] RSS affiliate sees Chinese telecom firms as security risk for India, The Hindu report.
  • [Aug 18] Traders body calls for boycott of Chinese goods, seeks upto 500% import duty, Livemint report.
  • [Aug 19] India looks to acquire military equipment on lease amidst budget squeeze, Defence Aviation Post report.
  • [Aug 20] India, France likely to finalize roadmap for digital, cyber security cooperation, The Economic Times report; The Indian Express report.
  • [Aug 20] ‘Make in India’ Software Defined Radio: ‘Mother’ of all solutions for tactical communications of armed forces, Financial Express report.
  • [Aug 20] Need to reduce dependence on foreign manufacturers to modernise Indian Air Force, says defence minister Rajnath Singh, Firstpost report.
  • [Aug 20] Strike total at all 41 ordnance factories, say unions on day one, The Hindu Business Line report; Deccan Herald report.
  • [Aug 21] Government neglect may force HAL to crash land, Deccan Herald report.
  • [Aug 21] Cabinet Secretariat raps MoD, MEA for not involving NSA, The Economic Times report.
  • [Aug 21] Ajay Kumar appointed new Defence Secretary, The Economic Times report.
  • [Aug 21] Ordnance factories continue strike, MoD calls their products ‘high cost, low quality’, India.com report.
  • [Aug 24] It’s about national security: Arun Jaitley on how 2019 elections were different from 2014, India Today report.
  • [Aug 24] Gaganyaan: Russian space suits, French medicine for Indian astronauts? The Hindu report.
  • [Aug 24] Ordnance strike: Unions to take a call on Centre’s proposal on Aug 24, The Hindu Business Line report.
  • [Aug 25] Will India change its ‘No First Use’ policy? The Hindu report.
  • [Aug 21] French Eye: India to launch 8-10 satellites with France as part of a ‘constellation’ for maritime surveillance, The Pioneer report.

Cybersecurity

  • [Aug 19] Global Cyber Alliance launches cybersecurity development platform for Internet of Things (IoT) Devices, Dark Reading report.
  • [Aug 19] The US Army is struggling to staff its cyber units: GAO, Defense One report.
  • [Aug 20] A huge ransomware attack messes with Texas, Wired report.
  • [Aug 21] Experts call for cybersecurity cooperation at the Beijing Cybersecurity Conference, Xinhua News report.
  • [Aug 23] Enterprises are increasingly adopting AI, ML in cybersecurity: Experts, Livemint report.
  • [Aug 24] Anomaly detection as advanced cybersecurity strategy, iHLS report.
  • [Aug 24] Telangana preparing an army of cyber warriors, Telangana Today report.

Internet of Things

  • [Aug 22] ITI-Bhubaneswar introduces Internet of Things curriculum, The New Indian Express report.
  • [Aug 22] Will We Ever Have A Full Industrial Internet Of Things? Forbes report.
  • [Aug 24] IKEA Smart Home Investment Could Be Boost The Internet Of Things Needs, Forbes report.

Artificial Intelligence and Emerging Tech

  • [Aug 20] Use artificial intelligence for tax compliance: Direct tax panel, Business Standard report.
  • [Aug 20] Artificial intelligence and the world of tax litigation, Financial Express report.
  • [Aug 21] Intel launches first artificial intelligence chip Springhill, The Hindu report; News18 report.
  • [Aug 22] Facial recognition attendance systems for teachers to be installed in Gujarat’s govt schools, Medianama report.
  • [Aug 26] Yogi Govt Plans To Install Artificial Intelligence System In 12,500 Public Sector Buses, Business World report.

Telecom/5G

  • [Aug 24] PMO clears BSNL-MTNL revival, merger off the table, The Economic Times report.

Huawei

  • [Aug 19] Trump reiterates Huawei as ‘national security threat’, Cnet report.
  • [Aug 20] Tech giant Huawei slams US administration, calls sanctions politically motivated, India Today report.
  • [Aug 20] US sanctions on Huawei bite, but who gets hurt? Livemint report.
  • [Aug 21] Huawei founder tells staff it faces ‘live or die’ moment, Tech Radar report.
  • [Aug 22] China telcos weigh sharing 5G network to cut costs, potentially hurting Huawei, Reuters report.
  • [Aug 23] Huawei puts a price for Trump’s moves: $10 billion, The Hindu Business Line report.
  • [Aug 25] Trump, UK’s Johnson discuss Huawei on G7 sidelines, Reuters report.

Opinions and Analyses

  • [Aug 19] Vishal Krishna, Your Story, Data privacy is a fundamental right, but is the Indian startup ecosystem prepared for new protection law?
  • [Aug 19] Nitin Pai, Livemint, Appointing a chief of defence staff would just be the first step.
  • [Aug 19] Priyanjali Malik, The Hindu, An intervention that leads to more questions.
  • [Aug 19] Abhijit Iyer-Mitra, The Print, India needs tips from Israel on how to handle Kashmir. Blocking network is not one of them.
  • [Aug 19] Ria Singh Sawhney, The Wire, Aadhaar: A Primer to knowing your rights.
  • [Aug 19] Alok Deb, Institute for Defence Studies and Analysis, Finally a CDS for the Indian Armed Forces.
  • [Aug 20] TOI Editorial, Aadhaar Hydra again: EC wants to link voter roll to Aadhaar data, this is unnecessary and risky.
  • [Aug 20] Lt. Gen. Harwant Singh (Retd), The Economic Times, A CDS for the armed forces must come with full play.
  • [Aug 20] Darren Death, Forbes, Is cybersecurity automation the future?
  • [Aug 20] Asit Ranjan Mishra, Livemint, Why New Delhi is turning up the heat on PoK now. 
  • [Aug 21] Financial Express Opinion, Election ID linked to Aadhaar can make votes portable.
  • [Aug 21] Nabeel Ahmed, Read Write, Artificial Intelligence: A tool or a threat to cybersecurity?
  • [Aug 21] Asheeta Regidi, Firstpost, Aadhaar-social media account linking could result in creation of a surveillance state, deprive fundamental right to privacy.
  • [Aug 22] Sanjay Hegde, The Hindu, Sacrificing liberty for national security.
  • [Aug 22] Ahmed Ali Fayyaz, The Quint, Fall of J&K: Real reason – ‘Jamhooriyat, Insaniyat, Kashmiriyat’?
  • [Aug 22] AS Dulat, The Telegraph, Kashmir: The perils of a muscular approach. 
  • [Aug 22] Somnath Mukherjee, The Economic Times, Growth is the biggest national security issue.
  • [Aug 22] K Raveendran, The Leaflet, Aadhaar-social media profile linkage will open Pandora’s box.
  • [Aug 22] Mariarosaria Taddeo and Francesca Bosco, World Economic Forum blog, We must treat cybersecurity as a public good, here’s why.
  • [Aug 23] Nikhil Pahwa, Medianama, Against Facebook-Aadhaar linking.
  • [Aug 23] Ilker Koksal, Forbes, The rise of crypto as payment currency.
  • [Aug 24] Kalev Leetaru, Forbes, Social media platforms will increasingly define ‘truth’.
  • [Aug 25] Sandeep Unnithan, India Today, South block.
  • [Aug 25] Spy’s Eye, Outlook, Intel agencies need strengthening.
  • [Aug 26] Prasanna S., The Hindu, Privacy no longer supreme.
  • [Aug 26] Sunil Abraham, Business Standard, Linking Aadhaar with social media or ending encryption is counterproductive. 
  • [Aug 26] The Financial Express Opinion, Linking social media to Aadhaar is serious overkill.

[August 12-19] CCG’s Week in Review: Curated News in Information Law and Policy

This week PM Modi called for the creation of a Chief of Defence Staff in his Independence Day speech; internet and communications shutdowns continued in Jammu and Kashmir after a brief reprieve, amid fears of violence; parts of Jaipur in Rajasthan faced internet shutdowns following a communal clash; China’s new digital currency is almost ready for launch; and Maharashtra adopted wider use of blockchain technology – presenting this week’s most important developments in law and tech.

Internet Shutdown

  • [Aug 13] Jaipur: 24 people injured in communal clash, mobile internet suspended in some areas, Scroll report; Hindustan Times report.
  • [Aug 13] Internet services partially restored in Jammu & Kashmir: Report, Medianama report.
  • [Aug 17] Landline services partially restored in Valley, mobile internet back in 5 Jammu districts, Hindustan Times report.
  • [Aug 18] Mobile Internet services again snapped in Jammu region, The Times of India report.

Internal Security: Current Status of J&K

  • [Aug 18] Ex-defence officers and bureaucrats move SC against Centre’s decision on Article 370, The Economic Times report; The Quint report.
  • [Aug 18] The fear of losing land to outsiders grips Valley, The Tribune report.
  • [Aug 18] 4G internet services to be made operational only after assessing situation: Jammu Divisional Commissioner, ANI News report.
  • [Aug 18] Restrictions Reimposed in Srinagar After Protests, Clashes With Police, The Wire report; The Statesman report.
  • [Aug 18] About 4,000 people arrested in Kashmir since August 5: govt sources to AFP, The Hindu report.
  • [Aug 19] Schools Reopen In Kashmir But Few Children Show Up: 10 Points, NDTV report.
  • [Aug 19] Congress Leaders Deviate From Party Line on Article 370, Hail Centre’s Move in Kashmir, The Wire report.

Tech and National Security

  • [Aug 15] Independence Day 2019: Here are the highlights from PM Modi’s speech, Business Line report.
  • [Aug 15] Major push for military, infra in PM Modi’s I-Day speech: Key takeaways, Business Standard report.
  • [Aug 16] Explained: What is Chief of Defence Staff that PM Modi announced in I-Day speech, India Today report.
  • [Aug 17] Appointment of CDS will boost India’s national security and power projection capabilities, Economic Times report.

Aadhaar

  • [Aug 16] Amend existing laws so we can obtain and use people’s Aadhaar numbers for voter verification, says ECI to Law ministry, Medianama report.
  • [Aug 18] EC seeks legal backing to collect voters’ Aadhaar data to check duplication, Business Standard report.
  • [Aug 19] ‘Voluntary Aadhaar eKYC for bank accounts, mobile, MFs soon’, Times of India report.
  • [Aug 19] SC to consider Facebook’s Aadhaar plea on August 20, Deccan Herald report.

Digital India

  • [Aug 16] India’s battle to catch China in payment apps; country to become more digital, Livemint report.
  • [Aug 17] Top executives of Apple, Samsung, Xiaomi to meet government on August 19, Financial Express report.
  • [Aug 19] Lithuania can be important technology partner for India: Venkaiah Naidu, MoneyControl report.
  • [Aug 19] India to help Bhutan in digital payments, space technology, Hindustan Times report.

Cybersecurity

  • [Aug 13] NIST seeks industry feedback as Internet of Things cybersecurity standards take shape, Federal News Network report.
  • [Aug 14] Cybersecurity Startup Securiti.ai emerges from Stealth with $31 million Investment, ciscomag.com report.
  • [Aug 15] Kaspersky announces new Transparency Centre in Malaysia, Livemint report; Digital News Asia report.
  • [Aug 19] Cybersecurity leader Vectra establishes operations in Asia-Pacific to address growing demand for network detection and response in the cloud, Yahoo Finance report.

Emerging Technology

  • [Aug 14] Huawei starts research on 6G internet, Cnet report.
  • [Aug 14] Google’s soccer-playing A.I. hopes to master the world’s most popular sport, Digital Trends report.
  • [Aug 19] Three UK rolls out 5G home internet access in London, Engadget report.
  • [Aug 19] Cyber Security: Are IoT deployments in India safe from hackers?, Financial Express report.

Blockchain

  • [Aug 12] Reliance AGM: Mukesh Ambani backs blockchain technology; says data is wealth, Business Today report.
  • [Aug 19] Maharashtra to use blockchain technology in agriculture marketing, vehicle registration, DNA report; Cointelegraph report.

Cryptocurrency

  • [Aug 13] China says state Cryptocurrency set to rival Bitcoin is ‘Close’ to Launch, The Independent report.
  • [Aug 14] China’s Digital Currency Is Unlikely to Be a Cryptocurrency, Forbes report.
  • [Aug 14] IAMAI says RBI has no authority to ban cryptocurrencies, The Economic Times report.
  • [Aug 16] Chinese National Cryptocurrency Turns Out Not Being an Actual Crypto, Cointelegraph report.
  • [Aug 17] Cryptocurrency This Week: Supreme Court To Conclude Hearing Next Week, Bitcoin India Investigations And More, Inc42 report.
  • [Aug 17] Mastercard is assembling its own cryptocurrency team, New York Post report.
  • [Aug 18] Facebook’s Calibra cryptocurrency wallet already has competition, Cnet report.
  • [Aug 19] Silvergate Bank Plans to Offer Cryptocurrency-Collateralized Loans, Cointelegraph report.
  • [Aug 19] Binance planning to launch ‘Venus,’ similar to Facebook’s upcoming cryptocurrency Libra, Theblockcrypto.com report.

Huawei

  • [Aug 17] US set to give Huawei another 90 days to buy from American firms, Khaleej Times report.
  • [Aug 18] Trump To Suddenly Throw Lifeline To Huawei, Report Says, Saving Huawei Mate 30 Pro, Forbes report.
  • [Aug 18] Huawei to Trump: “You don’t want us to fight Google.”, EsquireME report.
  • [Aug 19] Trump: ‘I don’t want to do business with Huawei’, AlJazeera report.

Artificial Intelligence

  • [Aug 13] AI fights banana disease, Fruitnet.com report.
  • [Aug 13] TCS’ AI platform Ignio tops $60m revenue mark, The Economic Times report.
  • [Aug 14] Zensar Technologies places its bets on artificial intelligence, The Economic Times report.
  • [Aug 14] Ola ‘acquihires’ artificial intelligence start-up Pikup.ai, Financial Express report.
  • [Aug 14] Wipro launches edge artificial intelligence solutions powered by Intel, Dataquest report; Moneycontrol report.
  • [Aug 15] IT big 3 to offer artificial intelligence as a platform, The Economic Times report.
  • [Aug 15] Google Assistant tops 2019 digital assistance IQ test, but every AI posts gains, Venturebeat report.
  • [Aug 16] Google brings AI to studying with Socratic, ZDNet report.
  • [Aug 16] [Funding alert] AI startup Orbo.ai raises $1.6M from YourNest Ventures, Venture Catalysts, YourStory report.
  • [Aug 16] [Funding alert] Retail AI startup SprintAI raises $500,000 from InMobi co-founder, others, YourStory report.
  • [Aug 18] Small towns in India are powering the global race for artificial intelligence, The Economic Times report.
  • [Aug 18] New consortium aims to make Bengaluru hub for industrial AI, Livemint report.

Surveillance

  • [Aug 16] Data Leviathan: China’s Burgeoning Surveillance State, Human Rights Watch report.
  • [Aug 16] Uganda spends US$126 million on surveillance system with facial recognition from Huawei, South China Morning Post report.
  • [Aug 16] Trump administration reportedly wants to extend NSA phone surveillance program, Cnet report.
  • [Aug 17] White House is pushing to reauthorize law that allows surveillance on Americans’ phone records, salon.com report.

Data Privacy and Protection

  • [Aug 15] How Data Privacy Laws Can Fight Fake News, JustSecurity.org report.
  • [Aug 16] What The Great Hack tells us about data privacy, Livemint report.
  • [Aug 17] Facebook’s Bizarre Response To Privacy Scandals? New Pop-Up Cafés, Forbes report.
  • [Aug 18] Why Blockchain Technology is Important for Data Privacy, Bitcoinist.com report.

E-Commerce

  • [Aug 15] Alibaba results beat estimates on cloud, e-commerce growth, Economic Times report.
  • [Aug 16] E-commerce sector looks to self-regulate, Flipkart, Amazon not on board, CNBCTV 18 report.
  • [Aug 18] Fintech firm Suvidhaa plans e-commerce foray, Economic Times report.
  • [Aug 19] Why social e-commerce is set to become the next big thing in China, TechWire Asia report.
  • [Aug 19] China e-commerce sites block sales of protest gear to Hong Kong, The Japan Times report.

Opinions and Analyses

  • [Aug 13] Editorial, The Guardian, The Guardian view on surveillance: Big Brother is not the only watcher now.
  • [Aug 13] Anand Venkatanarayanan, Medianama, Dr Kamakoti’s solution for WhatsApp traceability without breaking encryption is erroneous and not feasible.
  • [Aug 14] Gladys Kong, Forbes, Why Stricter Data Privacy Laws Would Benefit The Data Industry.
  • [Aug 15] Tuhina Joshi and Vijayant Singh, Yourstory, Independence Day: When will India be free of data privacy issues?
  • [Aug 15] Shrija Agrawal, Livemint, ‘Cashless’ and the politics of innovation.
  • [Aug 15] Jeffrey Ton, Forbes, The Skeptic’s Guide To Assessing Artificial Intelligence.
  • [Aug 15] Matt Ocko and Alan Cohen, TechCrunch, Artificial intelligence can contribute to a safer world.
  • [Aug 15] Jaclyn Jaeger, Compliance Week, Data privacy vs. national security: Moving the conversation forward.
  • [Aug 15] Jinoy Jose P, Business Line, The Cheatsheet: How Internet shutdowns hurt the economy.
  • [Aug 15] Editorial, Hindustan Times, India’s tryst with freedom.
  • [Aug 16] Editorial, The Hindu, Words and deeds: On Modi’s I-Day vision.
  • [Aug 16] Editorial, The Indian Express, Injustice system.
  • [Aug 17] Guillermo M. Luz, The Inquirer, Apec’s new data privacy rules.
  • [Aug 17] Pravin Sawhney, The Tribune, Where does CDS fit in?
  • [Aug 18] Editorial, Hindustan Times, Fashioning India’s nuclear posture.
  • [Aug 18] Enrique Dans, Forbes, Will China’s New Cryptocurrency Make Virtual Cash Respectable?
  • [Aug 18] Stephanie Hare, The Guardian, Facial recognition is now rampant. The implications for our freedom are chilling.
  • [Aug 19] Thomas Hemphill, NWI Times, GUEST COMMENTARY: Artificial Intelligence and the antitrust challenges.
  • [Aug 19] Ravi Shankar Prasad, The Indian Express, Valley’s new dawn: An era of development and inclusion beckons.
  • [Aug 19] Ahmed Ali Fayyaz, The Quint, Massacre of J&K Laws and Constitution but No Blood on the Streets.

[June 24-July 1] CCG’s Week in Review: Curated News in Information Law and Policy

With the G20 Summit in Osaka easing trade tensions between the US and China, India has seen a week of developing policy positions on data localisation, tech startups and the fate of telecom providers — presenting this week’s most important developments in law and tech.

Aadhaar 

  • [June 24] New Aadhaar regulations by Government to bring it back in full force, will be voluntary, India Today report.
  • [June 24] Aadhaar bill introduced amid opposition protests, The Hindu report; Live Mint report.
  • [June 25] Bill on Aadhaar tabled, ‘doesn’t violate privacy’, The Tribune report.
  • [June 29] ‘One nation, One ration card’ scheme from July 1, 2020, The Hindu report.

Internet Shutdown

  • [June 28] Myanmar: Internet Shutdown Risks Lives, Human Rights Watch report.

Free Speech

  • [June 27] Modi Government stops advertising in The Times Group, The Hindu and The Telegraph newspapers, PGurus report.
  • [June 28] Modi Government freezes ads placed in Times of India, The Hindu and The Telegraph, The Wire report; The Deccan Herald report.

Data Protection

  • [June 24] UAE data protection law, similar to GDPR, likely landing this year, Tech Radar report.
  • [June 27] Home, IT Ministries discuss data protection bill, The Hindu report.
  • [June 27] National Centre being planned to hold and manage all public data, The Economic Times report; The Quint report.

Data Localisation

  • [June 26] Data storage rules out of e-commerce policy, Live Mint report; CNBC TV18 report; Medianama report.
  • [June 26] Government to decide if data issues need to be out of e-commerce policy, The Economic Times report.
  • [June 27] Personal data storage: Government not in a mood to dilute data localisation rule, Financial Express report.
  • [June 28] India stands by data localisation at G-20 summit, US opposes it, ETtech report.
  • [June 28] Data Localisation: What India can dictate, and what she can’t, Business Today report.
  • [June 29] India surrenders to US pressure on ‘data localisation’: Piyush Goyal and RBI send out confusing signals, National Herald report.

E-Commerce

  • [June 26] Not ready for global e-tail rules, India to tell G-20, ET Tech report.
  • [June 26] India to come out with national e-comm policy within 12 months: Piyush Goyal, ETtech report.
  • [June 27] The E-Commerce disaster that never was, The Economic Times report.
  • [June 30] Draft e-comm policy, data protection may figure at India-EU meet in Brussels on July 4, Your Story report.

Digital India

  • [June 26] Digital literacy drive needs more funds: MeitY, The Hindu Business Line report.
  • [June 27] RBI Committee recommends setting up Universal Enterprise ID, linkage across individual and enterprise PAN, Medianama report.
  • [June 27] MeitY expenditure under Digital India at Rs. 3,328 cr in 2018-19: Ravi Shankar Prasad, The Economic Times report.
  • [June 27] SEBI approves DVRs for tech startups, ETtech report.
  • [June 28] Indian startups cheer differential voting rights, ETtech report.
  • [June 28] Nitin Gadkari proposes Alibaba-like platform for MSME sector, Entrackr report.
  • [June 30] India’s OTT market will grow at 21.8% CAGR, PwC report says, ET Telecom report.

Cybersecurity

  • [June 25] Digital India’s response readiness against cyber attacks is frail, lack of online security awareness biggest weakness, Firstpost report.
  • [June 27] To avoid Huawei-like situation, India plans desi WhatsApp for official communication, ET Telecom report; Medianama’s take.
  • [June 28] Infosys cyber security unit sees rising demand as threats mount, ET Tech report.
  • [June 29] US FDA warns of cybersecurity risk to certain Medtronic insulin pumps, Live Mint report.
  • [June 29] Indian manufacturing industry at high cyber risk, The Asian Age report.
  • [June 29] Average DNS attack cost rises by 19% to $814,150 in APAC, ET Telecom report.
  • [July 1] As cyber attacks increase, Indian IT clients seek stricter contracts, more audits, ET Tech report.
  • [July 1] China increases cybersecurity industry development, Global Times report.

Telecom/5G

  • [June 24] DoT may move SC against Airtel, Tata Tele merger, The Economic Times report.
  • [June 26] Broadband forum seeks lower 5G spectrum prices, change in auction design, ET Telecom report.
  • [June 26] Government gets 6 proposals for 5G trials, including Huawei, The Economic Times report.
  • [June 27] Lack of 4G, staff costs slow down BSNL, MTNL, ET Telecom report.
  • [June 27] DoT, MeitY eye EU ‘toolbox’ to address 5G-related concerns, The Economic Times report.
  • [June 28] Telcos’ health needs to be vetted before 5G pricing, ET Tech report.
  • [June 28] Modi, Trump discuss India-US collaboration in 5G tech, ET Tech report.
  • [June 30] DoT will move cabinet with BSNL, MTNL package: Ravi Shankar Prasad, ET Telecom report.
  • [July 1] Ericsson, Nokia assure telcos of speedy 5G rollout, ET Telecom report.

More on Huawei

  • [June 24] Huawei offers to sign ‘no-backdoor’ pact with India govt, telcos to underline security commitment: CEO, The Economic Times report.
  • [June 24] UK’s approach to Huawei is flawed warns Ericsson’s US boss, Financial Times report.
  • [June 26] Nokia warns UK over using rival Huawei’s 5G kit, BBC report.
  • [June 26] Huawei claims 50 commercial 5G deals globally amid uncertainty in India, ET Telecom report.
  • [June 26] US-China trade war in 10 dates, ET Telecom report.
  • [June 26] Huawei’s telecom equipment is more likely to have flaws than rivals’, claims report by US cybersecurity company Finite State, Tech Radar report.
  • [June 26] India, US to discuss Huawei’s role in 5G trials, The Economic Times report.
  • [June 26] India may let Huawei conduct 5G trials defying Trump ban, International Business Times report.
  • [June 27] Tech companies find legal ways around Huawei blacklist, ET Telecom report.
  • [June 27] Medianama’s Huawei roundup: Promise of ‘no backdoor’ pact, 5G trial proposal and more.
  • [June 27] Huawei personnel worked with China’s military on research projects, The Economic Times report.
  • [June 28] How the Huawei issue affects India, The Economic Times report.
  • [June 28] Nokia’s CTO slams Huawei after ‘potential backdoors’ found in 55% of its devices, Forbes report.
  • [June 29] Nokia distances itself from CTO comments about Huawei, Tech Radar report.
  • [June 28] As Trump and Xi talk trade, Huawei will loom large, The New York Times report.
  • [June 29] Trump surprises G20 with Huawei concession: US companies can sell to Huawei, Forbes report; Live Mint report.
  • [June 30] Huawei lifeline shows Trump prefers business deals over cold war, Live Mint report.
  • [July 1] Google gets nod to license Android for Huawei, ET Telecom report.

Big Tech

  • [June 26] Google accused of working to prevent Trump return in 2020, ET Tech report.
  • [June 26] Next Google Chrome browser extension to increase data protection, Mediapost report; Forbes report, Medianama report.
  • [June 26] Google and University of Chicago sued over data sharing, The New York Times report.
  • [June 28] Italy stings Facebook with $1.1 million fine for Cambridge Analytica data misuse, TechCrunch report.
  • [June 29] Google may have leveraged Android unfairly, says CCI, Inc42 report.
  • [June 30] Spotify leaking user data with music labels: Report, ET Telecom report.

Emerging Tech/ AI

  • [June 22] White House updated national artificial intelligence strategy, Defense One report.
  • [June 24] Tech Mahindra unveils AI-based Humanoid HR for its Noida facility, Business Standard report.
  • [June 25] IT firms are buying niche firms to grow in emerging areas like IOT, ET Tech report.
  • [June 27] Commission-appointed panel publishes recommendations for artificial intelligence research, Science Business report. [Read the full report of the High Level Expert Group appointed by the European Commission here].

Cryptocurrencies

  • [June 26] Facebook’s crypto currency faces pre-G20 examination, Livemint report.
  • [June 27] Crypto exchange Koinex wraps up operations; laments apathy of regulators and banks, Entrackr report; ETtech report; Medianama report.

Tech and Law Enforcement

  • [June 25] Government surveillance at alarming levels: Forrester Global Map of Privacy Rights and Regulations, The Hindu Business Line report.
  • [June 28] Madras HC allows Internet freedom Foundation (IFF) to intervene to oppose plea for linking Aadhaar with social media accounts, Live Law report; Medianama report.
  • [June 29] Have social media companies helped TN Police combat cyber crimes? Affidavit in Madras HC has answers, Bar&Bench report.
  • [June 30] Railway police mulls using Aadhaar to track unidentified victims, The Hindu report; Times of India report.
  • [June 30] War on fake news: Government to roll out guidelines for Google, Facebook, WhatsApp, ISPs, Financial Express report.

Tech and Military

  • [June 24] Indian MoD approves procurement of 10 more P-8I aircraft for Indian Navy, Jane’s Defence Weekly report.
  • [June 26] Russia wants to fool its enemies by making its drones look like owls (Uri style), Newsweek report.
  • [June 26] Tatas may soon join queue for drone clearance from DGCA, ET Tech report.
  • [June 26] 41 pieces of space debris from India’s ASAT test still in orbit, six weeks after they were supposed to decay, according to Harvard astronomer, The Independent report.
  • [June 27] Defence minister to review services’ emergency weapon acquisitions for war preparedness, India Today report.
  • [June 27] India progresses import substitution projects under Make-II category, Jane’s Defence Weekly report.
  • [June 28] S-400 Triumf missile deal: India mulls Euro payments for Russian arms to escape US sanctions, Business Today report.
  • [June 28] Euro payment won’t end possibility of US sanctions on S-400 deal: Ex-Indian Defence Adviser, Sputnik International report.
  • [June 30] India signs Rs. 200 crore anti-tank missile deal with Russia, Live Mint report.

Opinions and Analyses

  • [June 23] Julian Vigo, Forbes, How cybersecurity is turning users into security experts.
  • [June 23] Nitin Pai, Live Mint, What it would take for India to become a proper space power.
  • [June 24] Satya Prakash, The Tribune, SC’s flip-flop on free speech.
  • [June 24] Tim Cushing, Tech Dirt (blog), Indian Government uses national security law, bad information to block Twitter accounts all over the world.
  • [June 24] Michael Schmitt, Just Security (blog), Top Expert Backgrounder: Aborted US strike, cyber operations against Iran and International Law.
  • [June 25] Rohan Choudhary, The Telegraph India, India’s strategic challenges in the near future will be naval, not continental.
  • [June 25] Bryan Menegus, Gizmodo, This is how you’re being manipulated (by Big Tech).
  • [June 25] Prannv Dhawan and John Simte, Firstpost, Combating fake news: MEITY should allow legal-flexibility based on social media platforms over one-size-fits-all regulatory approach.
  • [June 25] Alon Muroch, Forbes, What’s preventing crypto from going mainstream?
  • [June 25] Geoffrey S Corn, Lawfare, The aborted Iran strike: the fine line between necessity and revenge.
  • [June 25] Justin Sherman and Robert Morgus, Lawfare, The Confused US messaging campaign on Huawei.
  • [June 25] Live Mint Opinion, Accept Huawei’s offer.
  • [June 26] Sujan R Chinoy, IDSA Comment, Indo-US Defence Partnership: Future prospects.
  • [June 26] Lt Gen (Retd) DS Hooda, News18, India’s New Defence Cyber Agency Will have to Work Around Stovepipes Built by Army, Navy and Airforce.
  • [June 26] Goutam Das and Sonal Khetrapal, Business Today, Does the amended Aadhaar Bill circumvent Supreme Court’s order?
  • [June 26] Cameron F Kerry and John B Morris, The Brookings Institution, Why data ownership is the wrong approach to protecting privacy.
  • [June 27] Anirudh Rastogi and Ramani Ramachandran, ET Tech, RBI needs to overcome its fear of legitimate cryptocurrency usage.
  • [June 27] Arnab Dey, The Quint, What is India doing with all its big data?
  • [June 28] Samiksha Goel, Deccan Herald, 5G world will need new cybersecurity approach.
  • [June 28] Pravin Sawhney, The Wire, Nobody Should be surprised if this year’s defence budget hardly sees a boost.
  • [June 28] PK Vasudeva, The Statesman, A three-in-one position.
  • [June 28] Suhasini Haidar, The Hindu, At G20, India stands with developing world – not US, Japan – on 5G and data.
  • [June 29] Robert Farley, The Diplomat, Getting the world to comply with the US Huawei ban won’t be easy.
  • [June 28] Sidhant Kumar, The Economic Times (blog), Protection, or protectionist?
  • [June 29] Siddharth Sivaraman, The Sunday Guardian, Industrial development is a must for defence modernisation.
  • [June 30] Ephrat Livni, Quartz, Facebook’s Libra is spurring central banks’ interest in issuing cryptocurrency.
  • [June 30] Kamal Davar, The Sentinel / The Quint, Security Agenda for Modi 2.0: A national security doctrine.
  • [June 30] Arpita Mukherjee, The Pioneer, India should join e-commerce talks at WTO.
  • [June 30] Keshav Murugesh, The Hindu Business Line, Getting to the Digital Summit.
  • [June 30] Shobha Gupta, Bar & Bench, Section 66A: When a celebrated judgment of the Supreme Court cannot be implemented by the police.
  • [June 30] Nishant Sirohi, The Leaflet, AI Technologies: Putting human rights in the forefront.
  • [June 30] Eline Chivot, Financial Times, One year on, GDPR needs a reality check.
  • [June 29] Robert D Atkinson, Information Technology and Innovation Foundation, Trump was right to (Temporarily) Lift the Huawei Export Ban.
  • [June 30] Samir Saran and Richard Rahul Verma, The Print, Here’s how US and India can be major defence partners and take lead over Russia and China.

India’s Artificial Intelligence Roadmap

By Aditya Singh Chawla

There is now a near universal perception that Artificial Intelligence technologies are set to disrupt every sphere of life. However, this is coupled with concern regarding the social, ethical (and even existential) challenges that AI might present. As a consequence, there has been an uptick in governments’ interest in how best to marshal the development of these technologies. The United Kingdom, the United States, China, and France, among others, have all released vision documents that explore these themes.

This post, the first in a series, presents a brief overview of such initiatives by the Indian government. Subsequent posts will focus specifically on their treatment of personal data, as well as their consideration of ethical issues posed by AI.

~

Task Force on Artificial Intelligence

In August 2017, the Ministry of Commerce and Industry set up a ‘Task Force on Artificial Intelligence for India’s Economic Transformation’. A panel of 18 members was formed with the objective of exploring how Artificial Intelligence could be best deployed in India.

The Task Force released its Report in May 2018, where it characterized AI as a ‘socio-economic problem solver at a large scale’, rather than simply a booster for economic growth. It sought to explore domains which would benefit from government intervention, with the objective of improving quality of life and generating employment. The report identifies 10 sectors where AI could be deployed – Manufacturing, FinTech, Healthcare, Agriculture and Food Processing, Education, Retail, Accessibility Technology, Environment, National Security and Public Utility Services. It attempts to identify challenges specific to each sector, as well as enabling factors that could promote the adoption of AI.

The report also explores the predicted impact of AI on employment, as well as other broader social and ethical implications of the technology. It concludes with a set of recommendations for the government of India. A primary recommendation is to constitute an Inter-Ministerial National Artificial Intelligence Mission (N-AIM) with a five-year budget of Rs. 1,200 crore. Other recommendations focus on creating an ecosystem for better availability of data for AI applications; skilling and education initiatives focused on AI; and standard-setting, as well as international participation in standard-setting processes.

NITI Aayog’s National Strategy for Artificial Intelligence

In his Budget speech, the Finance Minister had tasked the NITI Aayog with formulating a national programme for Artificial Intelligence. In June 2018, the NITI Aayog released its roadmap in the form of the National Strategy for Artificial Intelligence in India.

The paper frames India’s AI ambitions in terms of increasing economic growth, social development, and as an incubator for technology that can cater to other emerging economies. It focuses on 5 sectors as avenues for AI-led intervention: healthcare, agriculture, education, smart cities, and smart mobility. It also identifies some key challenges to the effective adoption of AI. These include low awareness, research, and expertise in AI, along with an absence of collaboration; the lack of ecosystems that enable access to usable data; high resource costs; and ill-adapted regulations.

The paper then presents a series of recommendations to address some of these issues. In order to expand AI research in India, it proposes a two-tier framework focusing on basic research as well as application-based research. It also proposes the creation of a common computing platform to pool cloud infrastructure and reduce infrastructural requirements for research institutions. It further suggests a review of the intellectual property framework to enable greater AI innovation. In order to foster international collaboration, the paper proposes the creation of a supranational CERN-like entity for AI. It also recommends skilling and education initiatives to address job creation, as well as the current lack of AI expertise. In order to accelerate adoption, it proposes a platform for sharing government datasets, along with a marketplace model for data collection and aggregation, data annotation, and deployable AI models.

The paper concludes with its recommendations for ‘responsible’ AI development. It recommends that there be a consortium of the Ethics Councils at each of the AI research institutions. It further proposes the creation of a Centre for Studies on Technology Sustainability. It also emphasizes the importance of fostering research on privacy preserving technology, along with general and sectoral privacy regulations.

Further reports suggest that a task force will be set up to execute the proposals that have been made, in coordination with the relevant ministries.

MeitY Committees

It has also been reported that four committees were constituted in February 2018 to deliberate on issues of ‘data for AI, applications of AI, skilling and cyber security/legal, ethical issues.’ However, there have been no reports on when the committees will present their recommendations, or whether these will be made available to the public.

~

India appears to be at the nascent stage of formulating its approach towards Artificial Intelligence. Even so, it is encouraging that the government recognizes the importance of its stewardship. Purely market-led development of AI could bring all of its disruption, without any of the envisaged social benefits.

Aditya is an Analyst at the Centre for Communication Governance at National Law University Delhi