India’s foray into the vertical regulation of AI technologies

By Nidhi Singh

AI governance has been a major focus of regulatory and policy circles in recent years. Given the economic potential of AI and the rapid pace of developments in the field, there have been many calls to strengthen the regulation of AI applications. In this post, we discuss some of the approaches to AI governance currently emerging in India. 

AI Regulation: Vertical vs Horizontal Approach

Globally, there are two broad approaches to AI regulation – the horizontal approach and the vertical approach. The debate between these approaches revolves around the scope and specificity of the regulations. A horizontal regulatory framework, exemplified by the European Union’s AI Act, seeks to provide overarching guidelines that apply uniformly across various sectors and applications of AI. This means that the AI Act applies to all uses of AI across sectors, from facial recognition technologies and self-driving cars to the use of AI in video games. This approach lays down a basic level of protection for all AI applications used in the EU, and uses a risk-based framework to impose stricter regulation on AI systems that have a greater impact on human rights. 

In contrast, a vertical approach tailors regulation to specific applications of AI, resulting in targeted governance. This allows for sector-specific rules, such as China’s regulation of recommendation algorithms or its draft rules on Generative AI. Vertical regulations allow for more nuanced, sector-specific laws that target the particular concerns likely to arise in specialised fields like healthcare, insurance or fintech. 

The Indian scenario – a horizontal approach

India does not currently follow any one specific approach to AI governance. Its first concrete foray into the field of AI governance can be traced back to NITI Aayog’s National Strategy on Artificial Intelligence, released in 2018. This was followed by a range of other policy documents, including the AI for All principles released in 2020 and 2021, and the Department of Telecommunications’ document on the AI stack. All of these documents followed a broad, principles-based approach to AI governance, focusing on the development and application of AI ethics across sectors, to all AI applications in India. 

AI for All also discusses the idea of “contextualising AI governance to the prevailing constitutional morality”. This speaks to the broader goal of embedding constitutional principles such as non-discrimination, privacy, and the right to freedom of speech and expression into AI regulation, though the document does not indicate how this would be implemented. The documents also laid down broad principles for responsible AI, such as safety and reliability; equality; inclusivity and non-discrimination; privacy and security; transparency; accountability; and the protection and reinforcement of positive human values. 

This broad, principles-based approach is closer to horizontal regulation: it applies across sectors rather than depending on the specific use cases in which AI is deployed. The same principles would thus govern the application of AI-based systems in insurance, employment and education, as well as their use in smart cities and self-driving cars. 

Shifting to the vertical approach

The AI regulatory landscape in India has changed over the last two years. In March 2023, the Indian Council of Medical Research (ICMR) released the “Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare”, the first set of guidelines to apply specifically to the use of AI in the healthcare sector. The guidelines aim to ensure ethical conduct and provide a set of guiding principles that experts and ethics committees can use while reviewing research proposals involving AI-based technologies. 

(Source: Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare, ICMR)

The guidelines recognise the increasing scope for the use of AI in hospitals, research and healthcare apps, and lay down comprehensive principles for the intersection of AI with medical research and healthcare. They set out an extensive framework, including protocols on how current medical ethics guidelines must be adapted to incorporate the use of AI, and how this would be implemented by different stakeholder groups. 

In another sector, the Smart Cities Mission released the ‘AI Playbook for Cities’ in 2022. Recognising the potential of AI-based applications in urban planning, the playbook was designed as an instrument to aid administrators in adopting and deploying AI solutions. 

(Infographic by Nidhi Singh)

The playbook draws on the principles of responsible AI released by NITI Aayog in 2020, but goes one step further by showing how to contextualise these principles for smart cities and how to manage and mitigate the risks brought about by AI technologies. The playbook states that while the ethical principles lay down a broad framework, they must be supplemented with more specific principles that impose enforceable, targeted responsibilities on different types of stakeholders, such as industry, academia and citizens. 

Implications for AI governance

This shift in India’s AI strategy, from the initial horizontal framework of 2018 to a more vertical approach, reflects a recognition that nuanced regulation is needed to address the distinct challenges and opportunities presented by different AI applications. Overall, we can see an evolution from a purely horizontal approach to a mixed approach that applies the AI principles to specific sectors. 

Over the last few years, there has been a growing recognition of the economic potential of AI. With both States and private entities entering the fray, there has been a drastic increase in the number of AI applications in use in India, as well as in the scope of their use. However, there are currently no immediate plans for AI-specific regulation in India akin to the EU AI Act. 

While the upcoming Digital India Act may contain some provisions regulating AI, there is a clear lack of formal governance structures at the moment. Given the potential impact AI can have on human rights, labour markets and the economy more broadly, leaving it completely unregulated poses a significant threat to the well-being of individuals and society as a whole. Without proper regulation in place, there is a heightened risk of AI systems being deployed in ways that infringe upon fundamental rights, such as privacy and freedom from discrimination. Additionally, the unchecked proliferation of AI in labour markets could exacerbate existing inequalities and lead to widespread job displacement without adequate measures to support those affected. Furthermore, any use of AI systems by the State for welfare measures without safeguards could lead to widespread discrimination against vulnerable communities. 

Therefore, it is imperative for India to establish comprehensive regulatory frameworks that address the unique challenges posed by AI, ensuring that its benefits are maximised while its risks are mitigated. 

(The opinions expressed in the blog are personal to the author/writer. The University does not subscribe to the views expressed in the article / blog and does not take any responsibility for the same.)