Building an AI Governance Framework for India, Part II

Embedding Principles of Safety, Equality and Non-Discrimination

This post has been authored by Jhalak M. Kakkar and Nidhi Singh

In July 2020, the NITI Aayog released a draft Working Document entitled “Towards Responsible AI for All” (hereafter ‘NITI Working Document’ or ‘Working Document’). This Working Document was initially prepared for an expert consultation held on 21 July 2020. It was later released for comments by stakeholders on the development of a ‘Responsible AI’ policy in India. CCG responded with comments to the Working Document, and our analysis can be accessed here.

In our previous post on building an AI governance framework for India, we discussed the legal and regulatory implications of the proposed Working Document and argued that India’s approach to regulating AI should be (1) firmly grounded in its Constitutional framework and (2) based on clearly articulated overarching principles. While the NITI Working Document introduces certain principles, it does not go into any substantive details on what the adoption of these principles into India’s regulatory framework would entail.

We will now examine these ‘Principles for Responsible AI’, their constituent elements and avenues for incorporating them into the Indian regulatory framework. The NITI Working Document proposed the following seven ‘Principles for Responsible AI’ to guide India’s regulatory framework for AI systems: 

  1. Safety and reliability
  2. Equality
  3. Inclusivity and Non-Discrimination
  4. Privacy and Security 
  5. Transparency
  6. Accountability
  7. Protection and Reinforcement of Positive Human Values. 

This post explores the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. A subsequent post will discuss the principles of Privacy and Security, Transparency, Accountability and the Protection and Reinforcement of Positive Human Values.

Principle of Safety and Reliability

The Principle of Safety and Reliability aims to ensure that AI systems operate reliably, in accordance with their intended purpose, throughout their lifecycle, and that they remain secure, safe and robust. It requires that AI systems not pose unreasonable safety risks, adopt safety measures proportionate to the potential risks, be continuously monitored and tested for compliance with their intended purpose, and have a continuous risk management system in place to address any identified problems. 

Here, it is important to note the distinction between safety and reliability. Reliability refers to the ability of an AI system to behave exactly as its designers intended and anticipated: a reliable system adheres to the specifications it was programmed to carry out. Reliability is therefore a measure of consistency, and it establishes confidence in the safety of a system. Safety, by contrast, refers to an AI system’s ability to do what it is supposed to do without harming users (human physical integrity), resources or the environment.

Human oversight: An important aspect of ensuring the safety and reliability of AI systems is the presence of human oversight over the system. Any regulatory framework that is developed in India to govern AI systems must incorporate norms that specify the circumstances and degree to which human oversight is required over various AI systems. 

The requisite level of human oversight would depend upon the sensitivity of the function and the potential the AI system has to significantly impact an individual’s life. For example, AI systems deployed in the context of the provision of government benefits should have a high level of human oversight, with decisions made by the AI system reviewed by a human before being implemented. Other AI systems, such as vending machines running simple algorithms, may be deployed in contexts that do not need constant human involvement; however, these systems should still have a mechanism in place for human review if a question is subsequently raised, say, by a user. Hence, the purpose for which the system is deployed and the impact it could have on individuals would be relevant factors in determining whether a ‘human in the loop’, ‘human on the loop’, or other oversight mechanism is appropriate. 
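The tiering of oversight described above could be sketched, purely illustratively, as follows. The categories, thresholds and names (`Oversight`, `required_oversight`) are our own hypothetical constructs for the purpose of illustration, and are not drawn from the NITI Working Document:

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_THE_LOOP = "a human reviews every decision before it takes effect"
    HUMAN_ON_THE_LOOP = "a human monitors the system and can intervene"
    REVIEW_ON_REQUEST = "human review is available only if a user raises a question"

def required_oversight(impact_on_individuals: str, function_sensitivity: str) -> Oversight:
    """Illustrative mapping only: an actual regulatory framework would need
    to define these categories and thresholds precisely."""
    if impact_on_individuals == "significant" and function_sensitivity == "high":
        # e.g. an AI system deciding eligibility for government benefits
        return Oversight.HUMAN_IN_THE_LOOP
    if impact_on_individuals == "significant":
        return Oversight.HUMAN_ON_THE_LOOP
    # e.g. a vending machine running a simple algorithm
    return Oversight.REVIEW_ON_REQUEST

print(required_oversight("significant", "high").name)  # HUMAN_IN_THE_LOOP
print(required_oversight("minimal", "low").name)       # REVIEW_ON_REQUEST
```

The point of the sketch is simply that the oversight mechanism is a function of the deployment context, not a fixed property of the technology.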

Principle of Equality

The principle of equality holds that everyone, irrespective of their status in society, should receive the same opportunities and protections as AI systems are developed and deployed. 

Implementing equality in the context of AI systems essentially requires three components: 

(i) Protection of human rights: AI instruments developed across the globe have highlighted that the implementation of AI poses risks to the right to equality, and that countries must take proactive steps to mitigate such risks. 

(ii) Access to technology: AI systems should be designed to ensure widespread access to the technology, so that people may derive its benefits.

(iii) Guarantees of equal opportunities through technology: The guarantee of equal opportunity relies upon the transformative power of AI systems to “help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge” and “produce social and economic benefits for all by reducing social inequalities and vulnerabilities.” AI systems will have to be designed and deployed such that they further the guarantees of equal opportunity and do not exacerbate and further entrench existing inequality.

The development, use and deployment of AI systems in society would pose the above-mentioned risks to the right to equality, and India’s regulatory framework for AI must take steps to mitigate such risks proactively.

Principle of Inclusivity and Non-Discrimination

In the context of AI, the idea of non-discrimination arises largely out of technical considerations. It requires that bias, whether it stems from the training data, technical design choices, or the manner of the technology’s deployment, be mitigated so that the AI system does not produce discriminatory impacts. 

An example can be seen in data collection for policing: disproportionate police attention paid to minority neighbourhoods produces higher recorded incidences of crime in those neighbourhoods, thereby skewing the results of an AI system trained on that data. The use of AI systems becomes safer when they are trained on datasets that are sufficiently broad and that encompass the various scenarios in which the system is envisaged to be deployed. Additionally, datasets should be developed to be representative, so as to avoid discriminatory outcomes from the use of the AI system. 

Another example is semi-autonomous vehicles whose software performs worse at recognising darker-skinned pedestrians, leading to higher accident rates for them. This can be traced back to training datasets composed mostly of light-skinned people; such a lack of diversity in the dataset can lead to discrimination against specific groups in society. To ensure effective non-discrimination, an AI system’s training data must be truly representative of society, with no section of the populace either over-represented or under-represented in a way that skews the dataset. While designing AI systems for deployment in India, the constitutional rights of individuals should serve as the central values around which the systems are designed. 

In order to implement inclusivity in AI, the diversity of the team involved in design as well as the diversity of the training data set would have to be assessed. This would involve the creation of guidelines under India’s regulatory framework for AI to help researchers and programmers in designing inclusive data sets, measuring product performance on the parameter of inclusivity, selecting features to avoid exclusion and testing new systems through the lens of inclusivity.

Checklist Model: To address the challenges of non-discrimination and inclusivity, one potential model that could be adopted in India’s regulatory framework for AI is the ‘Checklist’. The European Network of Equality Bodies (EQUINET), in its recent report ‘Meeting the new challenges to equality and non-discrimination from increased digitisation and the use of Artificial Intelligence’, provides a checklist to assess whether an AI system complies with the principles of equality and non-discrimination. The checklist consists of several broad categories, with a focus on the deployment of AI technology in Europe, including heads such as direct discrimination, indirect discrimination, transparency, other types of equity claims, data protection, liability issues, and identification of the liable party. 

The list contains a series of questions that assess whether an AI system meets standards of equality and that identify any potential biases it may have. For example, the question “Does the artificial intelligence system treat people differently because of a protected characteristic?” covers both direct use of the characteristic and the use of proxies for it; if the answer is yes, the system would be flagged for direct or indirect discrimination accordingly. A similar checklist, contextualised for India, could be developed and employed in India’s regulatory framework for AI. 
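A minimal sketch of how such a checklist-based assessment might be operationalised is given below. The heads and questions are paraphrased or hypothetical, not the EQUINET report’s actual wording, and the flagging logic is our own illustrative assumption:

```python
# Hypothetical checklist heads, loosely modelled on the categories the
# EQUINET report describes (paraphrased, not the report's actual text).
CHECKLIST = {
    "direct_discrimination": "Does the system treat people differently because of a protected characteristic?",
    "indirect_discrimination": "Does the system rely on proxies correlated with a protected characteristic?",
    "transparency": "Can the system's decisions be explained to affected individuals?",
    "liability": "Is the party liable for a discriminatory outcome identifiable?",
}

def assess(answers: dict) -> list:
    """Return the checklist heads flagged as potential compliance concerns.
    For the discrimination heads, a 'yes' answer is a red flag; for the
    transparency and liability heads, a 'no' answer is."""
    red_flag_on_yes = {"direct_discrimination", "indirect_discrimination"}
    flags = []
    for head, answer in answers.items():
        if head in red_flag_on_yes and answer:
            flags.append(head)
        elif head not in red_flag_on_yes and not answer:
            flags.append(head)
    return flags

print(assess({
    "direct_discrimination": False,
    "indirect_discrimination": True,
    "transparency": True,
    "liability": False,
}))  # ['indirect_discrimination', 'liability']
```

An India-specific version would replace these heads with categories grounded in the constitutional guarantees of equality and non-discrimination.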

Way forward

This post has highlighted some of the key aspects of the principles of Safety and Reliability, Equality, and Inclusivity and Non-Discrimination. Integrating the principles identified in the NITI Working Document into India’s regulatory framework requires that we first clearly define their content, scope and ambit, so as to identify the right mechanisms to operationalise them. Given that the NITI Working Document does not explore the content of these AI principles or the mechanisms for their implementation in India, we have examined the relevant international literature on the adoption of AI ethics and suggested mechanisms for their adoption. The NITI Working Document has spurred discussion around designing an effective regulatory framework for AI; however, these discussions are at a preliminary stage, and a far more nuanced proposal for a regulatory framework still needs to be developed.

Over the last week, India hosted the Responsible AI for Social Empowerment (RAISE) Summit, which involved discussions around India’s vision and roadmap for social transformation, inclusion and empowerment through Responsible AI. As we discuss mechanisms for India to effectively harness the economic potential of AI, we also need to design, simultaneously and not as a post-deployment afterthought, an effective framework to address the massive regulatory challenges emerging from the deployment of AI. While a few of the RAISE sessions engaged with certain aspects of regulating AI, there remains a need for extensive, continued public consultations with a cross-section of stakeholders to embed the principles of Responsible AI in the design of an effective AI regulatory framework for India. 

For a more detailed discussion on these principles and their integration into the Indian context, refer to our comments to the NITI Aayog here. 
