Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part II

Post authored by Prateek Sibal

The previous post on “Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part I” discussed recent advancements in AI technologies that have led to new commercial applications with potentially adverse social implications. We also considered the challenges of AI governance and discussed the role of technical benchmarks for evaluating AI systems.

In this post, we explore the different AI risk assessment approaches that can underpin AI regulation. We conclude with a discussion of the next steps for national AI governance initiatives.

Artificial Intelligence Risk Assessment Frameworks

Risk assessments can help identify the AI systems that need to be regulated. Risk is determined by the severity of a problem's impact and the probability of its occurrence. For example, the risk profile of a facial recognition system used to unlock a mobile phone differs from that of a facial recognition system used by law enforcement in public spaces. The former may be beneficial because it adds a privacy-protecting security feature to the phone, whereas the latter can have a chilling effect on free expression and privacy because of its mass surveillance capability. The risk score for facial recognition systems therefore depends on their use and deployment context. This section discusses some of the approaches followed by various bodies in developing risk assessment frameworks for AI systems.
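To make this framing concrete, the sketch below scores two hypothetical facial recognition deployments as the product of an assumed impact severity and an assumed probability of harm. The scale, the numeric values and the deployment names are purely illustrative assumptions, not figures from any official framework.

```python
# Illustrative sketch: risk as a function of impact severity and probability of harm.
# All numbers and deployment contexts are hypothetical assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Deployment:
    name: str
    severity: float      # assumed severity of harm, on a 0-1 scale
    probability: float   # assumed probability of that harm occurring, on a 0-1 scale

    @property
    def risk_score(self) -> float:
        # Risk is modelled here simply as severity multiplied by probability.
        return self.severity * self.probability

deployments = [
    Deployment("Face unlock on a personal phone", severity=0.2, probability=0.1),
    Deployment("Live face recognition in public spaces", severity=0.9, probability=0.7),
]

for d in deployments:
    print(f"{d.name}: risk score {d.risk_score:.2f}")
```

The same system can thus land at very different points on the risk scale depending on the deployment context it is scored against.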

European Commission’s approach

The European Commission’s legislative proposal on Artificial Intelligence classifies AI systems into four levels of risk and outlines risk-proportionate regulatory requirements for each. The categories proposed by the EU, summarised schematically in the sketch after the list, include:

  1. Unacceptable Risk: AI systems that pose a clear threat to people’s safety, livelihoods, and rights fall under the category of unacceptable risk. The EU Commission has stated that such applications, including social credit scoring systems and AI systems that manipulate human behaviour, will be banned.
  2. High Risk: AI systems that could harm the safety or fundamental rights of people are categorised as high-risk. Such systems face mandatory requirements, including on the “quality of data sets used; technical documentation and record-keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity”. The EU will maintain an updated list of high-risk AI systems to respond to emerging challenges. At present, high-risk AI systems include algorithms used in transport systems, job hiring processes, border control and management, law enforcement, education systems, and democratic processes.
  3. Limited Risk: When the risks associated with the AI systems are limited, only transparency requirements are prescribed. For example, in the case of a customer engaging with an AI-based chatbot, the customer should be informed that they are interacting with an AI system.
  4. Minimal Risk: When the risk level is identified as minimal, there are no mandatory requirements, but the developers of such AI systems may voluntarily choose to follow industry standards. Examples of such applications include AI-enabled video games or spam filters.
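As a schematic illustration of how these four tiers translate into obligations, the sketch below encodes the tier names from the proposal and the requirements described above as a simple lookup. Assigning any particular application to a tier is an illustrative assumption here, not a legal determination.

```python
# Schematic sketch of the EU proposal's risk tiers and the obligations described above.
# The tier names follow the proposal; mapping a given application to a tier is an
# illustrative assumption for this example.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # mandatory requirements apply
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no mandatory requirements

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from being placed on the market"],
    RiskTier.HIGH: [
        "quality of data sets used",
        "technical documentation and record-keeping",
        "transparency and the provision of information to users",
        "human oversight",
        "robustness, accuracy and cybersecurity",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
    RiskTier.MINIMAL: ["voluntary adherence to industry standards"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the obligations associated with a risk tier."""
    return OBLIGATIONS[tier]

# Example: a customer-facing chatbot is treated as limited risk in the proposal.
print(obligations_for(RiskTier.LIMITED))
```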

The EU proposal also prohibits real-time remote biometric identification systems, such as facial recognition deployed in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions), due to their adverse impact on fundamental rights such as freedom of expression and privacy.

German approach

In Germany, the Data Ethics Commission has proposed a five-level criticality pyramid, ranging from no regulation for applications at the lowest criticality level to a complete ban at the highest. Figure 2 presents the criticality pyramid and the risk-adapted regulatory framework for algorithmic systems. The EU approach is similar to the German one but differs in the number of levels.

Figure 2: Criticality pyramid and risk-adapted regulatory system for the use of algorithmic systems (Source: Opinion of the Data Ethics Commission)

UK approach

The Centre for Data Ethics and Innovation, tasked by the UK government with facilitating multistakeholder cooperation on the governance of data-driven technologies, identifies in its AI Barometer Report several risks common across AI systems as well as sector-specific risks. The common risks include:

  1. Bias: Algorithmic bias and discrimination
  2. Explainability: Lack of explainability of AI systems
  3. Regulatory capacity: Regulatory capacities of the state, i.e. its capacity to develop and enforce regulation
  4. Data privacy: Breach in data privacy due to failure in user consent
  5. Public trust: Loss of public trust in institutions due to problematic AI and data use

The researchers found that the severity of the common risks varies across sectors such as criminal justice, financial services, health and social care, digital and social media, and energy and utilities. For example, algorithmic bias leading to discrimination is considered high-risk in criminal justice, financial services, health and social care, and digital and social media, but medium-risk in energy and utilities. The risk assignment in this case was done through expert discussions.

Organisation for Economic Co-operation and Development (OECD) approach

The OECD’s work on AI classification presents a model for classifying an AI system that can inform risk assessment under each class. The preliminary classification of AI systems developed by the OECD Network of Experts’ working group on AI classification has four dimensions:

  1. Context: The context in which an AI system is developed and deployed. This includes the stakeholders that deploy the system, the stakeholders affected by its use, and the sector in which it is deployed.
  2. Data: The data and inputs to an AI system play a vital role in determining its outputs, depending on the data classifiers used, the source of the data, its structure and scale, and how it was collected.
  3. Type of algorithm: The type of algorithm used in an AI system has implications for transparency, explainability, autonomy and privacy, among other principles. For example, an AI system can use a rules-based algorithm, which executes a series of pre-defined steps; manufacturing robots used in assembly lines are an example of such rules-based AI. In contrast, AI systems based on artificial neural networks (ANNs) are inspired by the structure and function of the human brain. These networks learn to solve problems by iteratively adjusting their parameters until their outputs match the expected outcomes. In ANNs, the rules for reaching a decision are developed by the AI model itself, and the decision-making process is opaque to humans.
  4. Task: The task to be performed and the type of output expected vary across AI systems. AI systems can perform tasks ranging from forecasting and content personalisation to the detection and recognition of voice or images.

Applying this classification framework to different cases, from facial recognition systems and medical devices to autonomous vehicles, allows us to understand the risks along each dimension and design appropriate regulation. For autonomous vehicles, the transport context and its significant risk of accidents increase the risk associated with the AI system. Such vehicles dynamically collect data and other inputs through sensors, and they can suffer from security risks due to adversarial attacks, in which the input data fed to the AI models is tampered with, potentially leading to accidents. The AI algorithms used in autonomous vehicles perform tasks such as detecting road signs, setting vehicle parameters like speed and direction, and responding to road conditions. If such decision-making happens without human control or oversight, it can pose significant risks to the lives of passengers and pedestrians. This example illustrates that autonomous vehicles can be considered a high-risk category requiring robust regulatory oversight to ensure public safety.
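A minimal sketch of how the four OECD dimensions could be recorded for the autonomous vehicle example above follows. The field values are illustrative assumptions rather than an official OECD classification.

```python
# Minimal sketch of an OECD-style four-dimension classification applied to the
# autonomous-vehicle example. All field values are illustrative assumptions.

from dataclasses import dataclass, asdict

@dataclass
class AIClassification:
    context: str      # who deploys it, who is affected, and in which sector
    data: str         # data and inputs the system relies on
    algorithm: str    # type of AI model or algorithm used
    task: str         # task performed and output produced

autonomous_vehicle = AIClassification(
    context="Transport sector; deployed by vehicle makers; affects passengers and pedestrians",
    data="Dynamic sensor inputs (cameras, lidar, GPS) collected in real time",
    algorithm="Neural-network perception models combined with rule-based control logic",
    task="Detect road signs, set speed and direction, respond to road conditions",
)

# Print each dimension so a reviewer can assess the risk it contributes.
for dimension, description in asdict(autonomous_vehicle).items():
    print(f"{dimension}: {description}")
```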

The four approaches to risk assessment discussed above are systematic attempts to understand AI-related risks and develop a foundation for downstream regulation that could address risks without being overly prescriptive.

Next Steps in Strengthening Risk-Adaptive Regulation for AI

This two-part blog series has framed the challenges of AI governance in terms of the Collingridge Dilemma concerning the social control of technology. It then discussed the role of technical benchmarks in assessing the performance of AI systems vis-à-vis AI ethics principles. The section on AI risk assessment above presented different approaches to identifying the AI applications and contexts that require regulation.

As the next step, national-level AI governance initiatives could work towards strengthening AI governance through:

  1. AI Benchmarking: Continuous development and updating of technical benchmarks for AI systems to assess their performance under different contexts with respect to AI ethics principles.
  2. Risk Assessments at the level of individual AI applications: Development of use cases and risk-assessment of different AI applications under different combinations of contexts, data and inputs, AI models and outputs.
  3. Systemic Risk Assessments: Analysis of risks at a systemic level, primarily when different AI systems interact. For example, in financial markets, AI algorithms interact with each other, and in certain situations, their interactions can cascade into a market crash.

Once AI risks are better understood, proportionate regulatory approaches should be developed and subjected to Regulatory Impact Analysis (RIA). The OECD defines Regulatory Impact Analysis as a “systemic approach to critically assessing the positive and negative effects of proposed and existing regulations and non-regulatory alternatives”. RIAs can guide governments in understanding whether proposed regulations are effective and efficient in achieving the desired objective. As a complement to its legislative proposal for AI, the European Commission conducted an impact assessment of the proposed legislation and reported an aggregate compliance cost of between 100 and 500 million euros by 2025, mainly for high-risk AI applications, which account for 5-15 per cent of all AI applications. The assessment analyses other factors such as the impact of the legislation on the competitiveness of Small and Medium Enterprises (SMEs), the additional budgetary responsibility placed on national governments, and whether the proposed measures are proportionate to the objectives of the legislation. Such impact assessments are good regulatory practice and will become increasingly important as more countries work towards national AI legislation.

Finally, given the globalised nature of AI services and products, countries should develop national-level regulatory approaches to AI in conversation with one another. Importantly, these dialogues at the global and national levels should be multistakeholder-driven to ensure that different perspectives inform any ensuing regulation. Pooling knowledge and coordinating on the governance of AI risks will yield overall benefits by ensuring that AI development is ethically aligned, while policy coherence provides a stable environment for innovation and interoperability.

The author would like to thank Jhalak Kakkar and Nidhi Singh for their helpful feedback.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.

Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part I

Post authored by Prateek Sibal

In the five years between 2015 and 2020, 117 initiatives worldwide published AI ethics principles. Despite a skewed geographical scope, with 91 of these initiatives emerging in Europe and North America, the proliferation of AI ethics principles paves the way for building a global consensus on AI governance. Notably, the 37 OECD Member States have adopted the OECD AI Recommendation, the G20 has endorsed these principles, and the Global Partnership on AI is operationalising them. Within the UN system, the United Nations Educational, Scientific and Cultural Organization (UNESCO) is developing a Recommendation on the Ethics of AI that 193 countries may adopt in 2021.

An analysis of the different principles reveals a high-level consensus around eight themes: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. At the same time, ethical principles are criticised for lacking enforcement mechanisms. Companies often commit to AI ethics principles to improve their public image, with little follow-up on implementation; an exercise termed “ethics washing”. Evidence also suggests that knowledge of ethical tenets has little or no effect on whether software engineers factor ethical principles into the development of products or services.

Defining principles is essential, but it is only the first step for ethical AI governance. There is a need for mid-level norms, standards and guidelines at the international level that may inform regional or national regulation to translate principles into practice. This two-part blog will discuss the need for AI governance to evolve past the ‘ethics formation stage’ into concrete and tangible steps such as developing technical benchmarks and adopting risk-based regulation for AI systems.

Part one of the blog has three sections. The first section discusses some of the technical advances in AI technologies in recent years. These advances have led to new commercial applications with some potentially adverse social implications. Section two discusses the challenges of AI governance and presents a framework for mitigating the adverse implications of technology on society. Finally, section three discusses the role of technical benchmarks for evaluating AI systems. Part two of the blog will contain further discussion on risk assessment approaches to help identify the AI applications and contexts that need to be regulated.  It will also discuss the next steps for national initiatives for AI governance.

The blog follows the definition of an AI system proposed by the OECD’s AI Experts Group. They describe an AI system as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. It uses machine or human-based inputs to perceive real or virtual environments, abstract such perceptions into models (in an automated manner, e.g. with ML or manually), and use model inference to formulate options for information or action. AI systems are designed to operate with varying levels of autonomy.”
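As a schematic rendering of this definition, the sketch below expresses the perceive, abstract-into-model, and infer steps as an interface. The class and method names are assumptions made for illustration; they are not part of the OECD text.

```python
# Schematic rendering of the OECD definition quoted above as a perceive -> model -> infer
# loop. Class and method names are illustrative assumptions, not OECD terminology.

from abc import ABC, abstractmethod
from typing import Any

class AISystem(ABC):
    """A machine-based system pursuing human-defined objectives with some level of autonomy."""

    def __init__(self, objectives: list[str]):
        self.objectives = objectives  # human-defined objectives

    @abstractmethod
    def perceive(self, environment: Any) -> Any:
        """Use machine or human-based inputs to perceive a real or virtual environment."""

    @abstractmethod
    def abstract_to_model(self, perception: Any) -> Any:
        """Abstract perceptions into a model, in an automated or manual manner."""

    @abstractmethod
    def infer(self, model: Any) -> list[str]:
        """Use model inference to formulate options for information or action."""

    def step(self, environment: Any) -> list[str]:
        # One cycle: perceive, build or update a model, then produce recommendations or decisions.
        return self.infer(self.abstract_to_model(self.perceive(environment)))
```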

Recent Advances in AI Technologies

Artificial Intelligence is developing rapidly, so it is important to lay out a broad overview of the AI developments that may have profound and potentially adverse impacts on individuals and society. The 2021 AI Index report notes four crucial technical advances that have hastened the commercialisation of AI technologies:

  • AI-Generated Content: AI systems can generate high-quality text, audio and visual content at a level where it is difficult for humans to distinguish between synthetic and non-synthetic content.
  • Image Processing: Computer vision, a branch of computer science that “works on enabling computers to see, identify and process images in the same way that human vision does, and then provide appropriate output”, has seen immense progress in the past decade and is fast industrialising in applications that include autonomous vehicles.
  • Language Processing: Natural Language Processing (NLP) is a branch of computer science “concerned with giving computers the ability to understand text and spoken words in much the same way human beings can”. NLP has advanced to the point that AI systems with language capabilities now have meaningful economic impact through live translation, captioning, and virtual voice assistants.
  • Healthcare and biology: DeepMind’s AlphaFold solved the decades-old protein folding problem using machine learning techniques. This breakthrough will enable the study of protein structures and contribute to drug discovery.

These technological advances have social implications. For instance, the technology for generating synthetic faces has improved rapidly: as shown in Figure 1, AI systems produced grainy faces in 2014, but by 2017 they were generating realistic synthetic faces. Such systems have enabled the proliferation of ‘deepfake’ pornography, which overwhelmingly targets women and has the potential to erode people’s trust in the information and videos they encounter online. Some actors misuse deepfake technology to spread online disinformation, with adverse implications for democracy and political stability. Such developments have made AI governance a pressing matter.


Figure 1: Improvement in AI-generated images. Source: https://arxiv.org/pdf/1802.07228.pdf

Challenges of AI Governance

In this blog, AI governance is understood as the development and application by governments, the private sector, and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape AI’s evolution and use. As highlighted in the previous section, the rapid advancements in the field of AI technologies have brought the need for better AI governance to the forefront.

In thinking about AI governance, a conundrum that preoccupies many governments worldwide is how to enact regulation that does not stifle innovation while still providing adequate safeguards to protect human rights and fundamental freedoms.

Technology regulation is complicated because, until a technology has been extensively developed and widely used, its impact on society is difficult to predict. Yet once the technology is deeply entrenched and its effects on society are better understood, it becomes much harder to regulate. This tension between free, unimpeded technology development and the regulation of its adverse implications is termed the Collingridge dilemma.

David Collingridge, the author of The Social Control of Technology, noted that when regulatory decisions have to be made in ignorance of a technology’s social impact, continuous monitoring of that impact can help correct unexpected consequences early. Collingridge’s guidelines for decision-making under ignorance can inform AI governance as well. They include choosing technology options with:

  • Low failure costs: Selecting options with low error costs, i.e. if a policy or regulation fails to achieve its intended objective, the costs associated with failure are limited.
  • Quick to correct: Selecting technologies with a short response time for correction after the discovery of unanticipated problems.
  • Low cost of applying remedy: Selecting options with a low cost of applying the remedy, i.e. preferring options with a low fixed cost and a higher variable cost over those with a higher fixed cost; and
  • Continuous monitoring: Cost-effective and efficient monitoring to ensure that unanticipated consequences are discovered quickly.

For instance, the requirements around transparency in AI systems provide information for monitoring the impact of AI systems on society. Similarly, risk assessments of AI systems offer a pre-emptive form of oversight over technology development and use, which can help minimise potential social harms.  

Technical benchmarks for evaluating AI systems

To address ethical problems related to bias, discrimination, and the lack of transparency and accountability in algorithmic decision-making, quantitative benchmarks are needed to assess AI systems’ performance against these ethical principles.

The Institute of Electrical and Electronics Engineers (IEEE), through its Global Initiative on Ethics of Autonomous and Intelligent Systems, is developing technical standards, including on bias in AI systems. These describe “specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms”. Similarly, in the United States, the National Institute of Standards and Technology (NIST) is developing standards for explainable AI based on principles that call for AI systems to provide reasons for their outputs in a manner understandable to individual users, to explain the process used to generate the output, and to deliver a decision only when the system is sufficiently confident in it.
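To illustrate what a quantitative benchmark for an ethical principle such as fairness might look like, the sketch below computes a simple demographic parity difference on hypothetical predictions. The choice of metric, the data, and the idea of a flagging threshold are assumptions made for illustration; they are not the IEEE or NIST methodology.

```python
# Illustrative sketch of a quantitative fairness benchmark: the demographic parity
# difference between two groups. The metric choice and the data are assumptions made
# for illustration, not a standardised methodology.

def positive_rate(predictions: list[int]) -> float:
    """Share of positive (favourable) decisions in a list of 0/1 predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a: list[int], preds_group_b: list[int]) -> float:
    """Absolute gap in positive decision rates between two demographic groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical hiring-screen outputs (1 = shortlisted, 0 = rejected) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 0]
group_b = [0, 0, 1, 0, 0, 1, 0, 0]

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")
# A benchmark could, for instance, flag systems whose gap exceeds an agreed threshold.
```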

For example, there has been significant progress in introducing benchmarks relevant to the regulation of facial recognition technology. Facial recognition systems have a large commercial market and are used for various tasks, including in law enforcement and border control, such as verifying visa photos, matching photos against criminal databases, and detecting child abuse images. Such systems have caused significant concern because of high error rates in detecting faces and their potential to impinge on human rights. Biases in these systems have adverse consequences for individuals wrongly denied entry at borders or wrongfully incarcerated. In the United States, the National Institute of Standards and Technology’s Face Recognition Vendor Test provides a benchmark for comparing the performance of commercially available facial recognition systems by running their algorithms on different image datasets.
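As a toy illustration of how such a benchmark can compare systems, the sketch below computes false match and false non-match rates for two hypothetical algorithms on a small set of labelled image pairs. The similarity scores, the threshold and the results are invented for illustration and are not drawn from the Face Recognition Vendor Test.

```python
# Illustrative sketch of benchmarking two hypothetical face recognition algorithms by
# false match rate (FMR) and false non-match rate (FNMR) on labelled image pairs.
# The scores and threshold are invented for illustration; they are not FRVT data.

def error_rates(scores: list[float], same_person: list[bool], threshold: float):
    """Return (FMR, FNMR) for similarity scores compared against a decision threshold."""
    false_matches = sum(1 for s, same in zip(scores, same_person) if not same and s >= threshold)
    false_non_matches = sum(1 for s, same in zip(scores, same_person) if same and s < threshold)
    impostor_pairs = sum(1 for same in same_person if not same)
    genuine_pairs = sum(1 for same in same_person if same)
    return false_matches / impostor_pairs, false_non_matches / genuine_pairs

# Ground truth for six image pairs and similarity scores from two hypothetical systems.
same_person = [True, True, True, False, False, False]
system_a = [0.92, 0.88, 0.75, 0.30, 0.55, 0.20]
system_b = [0.85, 0.60, 0.80, 0.72, 0.40, 0.35]

for name, scores in [("System A", system_a), ("System B", system_b)]:
    fmr, fnmr = error_rates(scores, same_person, threshold=0.7)
    print(f"{name}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")
```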

The progress in defining benchmarks for ethical principles needs to be complemented by risk assessments of AI systems to pre-empt potentially adverse social impact in line with the Collingridge Dilemma discussed in the previous section. Risk assessments allow the categorisation of AI applications by their risk ratings. They can help develop risk-proportionate regulation for AI systems instead of blanket rules that may place an unnecessary compliance burden on technology development. The next blog in this two-part series will engage with potential risk-based approaches to AI regulation.

The author would like to thank Jhalak Kakkar and Nidhi Singh for their helpful feedback.

This blog was written with the support of the Friedrich Naumann Foundation for Freedom.