Post authored by Prateek Sibal
The previous post on “Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part I” discussed recent advancements in AI technologies that have led to new commercial applications with potentially adverse social implications. We also considered the challenges of AI governance and discussed the role of technical benchmarks for evaluating AI systems.
In this post, we explore the different AI risk assessment approaches that can underpin AI regulation, and conclude with a discussion of the next steps for national AI governance initiatives.
Artificial Intelligence Risk Assessment Frameworks
Risk assessments can help identify the AI systems that need to be regulated. Risk is determined by the severity of the impact of a problem and the probability of its occurrence. For example, the risk profile of a facial recognition system used to unlock a mobile phone differs from that of a facial recognition system used by law enforcement in public spaces. The former may be beneficial, as it adds a privacy-protecting security feature to the mobile phone. In contrast, the latter can have a chilling effect on free expression and privacy because of its mass surveillance capability. The risk score for facial recognition systems therefore depends on their use and deployment context. This section discusses some of the approaches followed by various bodies in developing risk assessment frameworks for AI systems.
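To make the severity-and-likelihood framing concrete, here is a minimal sketch of a context-dependent risk score. The `UseContext` structure, the 1-to-5 scales and the two facial recognition scenarios are hypothetical values chosen purely for illustration; they are not drawn from any of the frameworks discussed below.

```python
from dataclasses import dataclass


@dataclass
class UseContext:
    """Hypothetical description of where and how an AI system is deployed."""
    name: str
    severity: int    # impact if something goes wrong, 1 (low) to 5 (high)
    likelihood: int  # probability of that harm occurring, 1 (rare) to 5 (frequent)


def risk_score(context: UseContext) -> int:
    # Risk is commonly modelled as severity multiplied by likelihood.
    return context.severity * context.likelihood


# The same underlying technology, facial recognition, lands in very different
# risk bands depending on its deployment context.
phone_unlock = UseContext("on-device phone unlock", severity=2, likelihood=1)
public_surveillance = UseContext("law enforcement use in public spaces", severity=5, likelihood=4)

for ctx in (phone_unlock, public_surveillance):
    print(f"{ctx.name}: risk score {risk_score(ctx)}")
```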
European Commission’s approach
The European Commission’s legislative proposal on Artificial Intelligence classifies AI systems by four levels of risk and outlines risk-proportionate regulatory requirements. The categories proposed by the EU include:
- Unacceptable Risk: AI systems that pose a clear threat to people’s safety, livelihood, and rights fall under the category of unacceptable risk. The EU Commission has stated that applications that include social credit scoring systems and AI systems that can manipulate human behaviour will be banned.
- High Risk: AI systems that may adversely affect the safety or fundamental rights of people are categorised as high-risk. There are mandatory requirements for such systems, including the “quality of data sets used; technical documentation and record-keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity”. The EU will maintain an updated list of high-risk AI systems to respond to emerging challenges. At present, high-risk AI systems include AI algorithms used in transport systems, hiring processes, border control and management, law enforcement, education systems, and democratic processes.
- Limited Risk: When the risks associated with the AI systems are limited, only transparency requirements are prescribed. For example, in the case of a customer engaging with an AI-based chatbot, the customer should be informed that they are interacting with an AI system.
- Minimal Risk: When the risk level is identified as minimal, there are no mandatory requirements, but the developers of such AI systems may voluntarily choose to follow industry standards. Examples of such applications include AI-enabled video games or spam filters.
The EU proposal bans real-time remote biometric identification, such as facial recognition systems installed in public spaces, because of its adverse impact on fundamental rights like freedom of expression and privacy.
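As a rough sketch, the EU's four tiers can be thought of as a lookup from risk level to headline obligation. The tier names follow the proposal summarised above, but the short obligation strings and the `obligations_for` helper are simplified illustrations rather than the legal text.

```python
from enum import Enum


class EURiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. social scoring
    HIGH = "high"                  # mandatory requirements before deployment
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no mandatory requirements


# Illustrative, non-exhaustive mapping from tier to headline consequence.
OBLIGATIONS = {
    EURiskTier.UNACCEPTABLE: "prohibited",
    EURiskTier.HIGH: "data quality, documentation, transparency, human oversight, robustness",
    EURiskTier.LIMITED: "inform users that they are interacting with an AI system",
    EURiskTier.MINIMAL: "voluntary adherence to industry standards",
}


def obligations_for(tier: EURiskTier) -> str:
    return OBLIGATIONS[tier]


print(obligations_for(EURiskTier.LIMITED))
```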
German approach
In Germany, the Data Ethics Commission has proposed a five-layer criticality pyramid that ranges from no regulation at the lowest risk level to a complete ban at the highest. Figure 2 presents the criticality pyramid and risk-adapted regulation framework for AI systems. The EU approach is similar to the German approach but differs in the number of levels.

UK approach
The AI Barometer Report of the Centre for Data Ethics and Innovation, the body tasked by the UK government with facilitating multistakeholder cooperation on the governance of data-driven technologies, identifies risks that are common across AI systems as well as sector-specific risks. The common risks include:
- Bias: Algorithmic bias and discrimination
- Explainability: Lack of explainability of AI systems
- Regulatory capacity: Regulatory capacity of the state, i.e. its capacity to develop and enforce regulation
- Data privacy: Breach in data privacy due to failure in user consent
- Public trust: Loss of public trust in institutions due to problematic AI and data use
The researchers found that the severity of these common risks varies across sectors such as criminal justice, financial services, health and social care, digital and social media, and energy and utilities. For example, algorithmic bias leading to discrimination is considered a high risk in criminal justice, financial services, health and social care, and digital and social media, but a medium risk in energy and utilities. These risk levels were assigned through expert discussions.
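One way to picture this sector-dependent severity is as a small risk matrix. The sketch below encodes only the algorithmic-bias example described above; ratings for other risks and sectors would need to be filled in from the report itself.

```python
# Sector-by-risk matrix following the example in the text: algorithmic bias is
# rated "high" in most sectors but only "medium" in energy and utilities.
RISK_MATRIX = {
    "algorithmic bias": {
        "criminal justice": "high",
        "financial services": "high",
        "health & social care": "high",
        "digital & social media": "high",
        "energy & utilities": "medium",
    },
}


def sector_rating(risk: str, sector: str) -> str:
    # Return the expert-assigned rating, or a placeholder if none is recorded.
    return RISK_MATRIX.get(risk, {}).get(sector, "not assessed")


print(sector_rating("algorithmic bias", "energy & utilities"))  # -> medium
```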
Organisation for Economic Co-operation and Development (OECD) approach
The OECD’s work on AI classification presents a model for classifying AI systems that can inform risk assessment within each class. The preliminary classification of AI systems developed by the OECD Network of Experts’ working group on AI classification has four dimensions:
- Context: The context in which an AI system is developed and deployed. Context includes stakeholders that deploy an AI system, the stakeholders impacted by its use and the sector in which an AI system is deployed.
- Data: Data and inputs to an AI system play a vital role in determining the system’s outputs, depending on the data classifiers used, the source of the data, its structure and scale, and how it was collected.
- Type of algorithm: The type of algorithm used in an AI system has implications for transparency, explainability, autonomy and privacy, among other principles. For example, an AI system can use a rules-based algorithm, which executes a series of pre-defined steps; manufacturing robots used on assembly lines are an example of such rules-based AI. In contrast, AI systems based on artificial neural networks (ANNs) are inspired by the human brain’s structure and functions. These networks learn to solve problems by iterating many times until they produce the correct outcomes. In ANNs, the rules for reaching a decision are developed by the AI model itself, and the decision-making process is opaque to humans.
- Task: The kind of task to be performed and the type of output expected vary across AI systems. AI systems can perform various tasks from forecasting, content personalisation to detection and recognition of voice or images.
Applying this classification framework to different cases, from facial recognition systems and medical devices to autonomous vehicles, allows us to understand the risks along each dimension and design appropriate regulation. In autonomous vehicles, the context of transportation and its significant risk of accidents increases the risk associated with the AI system. Such vehicles dynamically collect data and other inputs through sensors. They can suffer from security risks due to adversarial attacks, where the input data fed to the AI models is tampered with, leading to accidents. The AI algorithms used in autonomous vehicles perform tasks such as detecting road signs, deciding vehicle parameters like speed and direction, and responding to road conditions. If such decision-making happens without human control or oversight, it can pose significant risks to the lives of passengers and pedestrians. This example illustrates that autonomous vehicles can be considered a high-risk category requiring robust regulatory oversight to ensure public safety.
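To show how the four OECD dimensions might be recorded in practice, the sketch below classifies the autonomous vehicle example along context, data, algorithm type and task. The `AISystemClassification` structure and its field values are an illustrative paraphrase of the discussion above, not an artefact of the OECD framework.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AISystemClassification:
    """Illustrative record of an AI system along the OECD's four dimensions."""
    context: str          # who deploys it, who is affected, and in which sector
    data_and_input: str   # where the data comes from and how it is collected
    algorithm_type: str   # e.g. rules-based system vs. learned neural network
    tasks: List[str] = field(default_factory=list)


autonomous_vehicle = AISystemClassification(
    context="transport sector; passengers and pedestrians directly affected",
    data_and_input="sensor data collected dynamically; exposed to adversarial tampering",
    algorithm_type="artificial neural networks, with limited human oversight at run time",
    tasks=["detect road signs", "set speed and direction", "respond to road conditions"],
)

print(autonomous_vehicle.tasks)
```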
The four approaches to risk assessment discussed above are systematic attempts to understand AI-related risks and develop a foundation for downstream regulation that could address risks without being overly prescriptive.
Next Steps in Strengthening Risk-Adaptive Regulation for AI
This two-part blog series has framed the challenges of AI governance in terms of the Collingridge Dilemma concerning the social control of technology. It then discussed the role of technical benchmarks in assessing the performance of AI systems vis-à-vis AI ethics principles. The section on AI risk assessment presented different approaches to identifying AI applications and contexts that require regulation.
As the next step, national-level AI governance initiatives could work towards strengthening AI governance through:
- AI Benchmarking: Continuous development and updating of technical benchmarks for AI systems to assess their performance under different contexts with respect to AI ethics principles.
- Risk Assessments at the level of individual AI applications: Development of use cases and risk-assessment of different AI applications under different combinations of contexts, data and inputs, AI models and outputs.
- Systemic Risk Assessments: Analysis of risks at a systemic level, particularly when different AI systems interact. For example, in financial markets, AI algorithms interact with each other, and in certain situations their interactions can cascade into a market crash.
Once AI risks are better understood, proportional regulatory approaches should be developed and subjected to Regulatory Impact Analysis (RIA). The OECD defines Regulatory Impact Analysis as a “systemic approach to critically assessing the positive and negative effects of proposed and existing regulations and non-regulatory alternatives”. RIAs can guide governments in understanding whether proposed regulations are effective and efficient in achieving their objectives. As a complement to its legislative proposal on AI, the European Commission conducted an impact assessment and reported an aggregate compliance cost of between 100 and 500 million euros by 2025, mainly for high-risk AI applications, which account for 5-15 per cent of all AI applications. The assessment also analyses other factors, such as the impact of the legislation on the competitiveness of Small and Medium Enterprises (SMEs), the additional budgetary responsibility placed on national governments, and whether the proposed measures are proportionate to the objectives of the legislation. Such impact assessments are good regulatory practice and will become increasingly important as more countries work towards national AI legislation.
Finally, given the globalised nature of AI services and products, countries should develop national-level regulatory approaches to AI in conversation with each other. Importantly, these dialogues at the global and national levels should be multistakeholder-driven to ensure that different perspectives inform any ensuing regulation. Pooling knowledge and coordinating on the governance of AI risks will deliver overall benefits: it can help ensure that AI develops in an ethically aligned manner, while the resulting policy coherence provides a stable environment for innovation and interoperability.
The author would like to thank Jhalak Kakkar and Nidhi Singh for their helpful feedback.
This blog was written with the support of the Friedrich Naumann Foundation for Freedom.