The government wants to use automated facial recognition technology to track criminals, identify suspects, or find missing persons. Here’s why it should think twice before it does so.
This post first appeared on HuffPost.in on August 2, 2019
Last month, the National Crime Records Bureau (NCRB) issued a Request for Proposal (RFP) inviting bids for the creation of an Automated Facial Recognition System. Facial recognition works by identifying distinct points on an individual’s face and creating a unique map of it. It is therefore more akin to a fingerprint than a photograph.
The RFP envisages the creation of a database of photographs, which could help the police identify a potential suspect, a missing person or unidentified dead bodies. If implemented, the proposed database will be made available to police stations across the country, including as a mobile application to help officers who are in the field.
Attempts to modernize methods of investigation are undoubtedly essential. However, this RFP comes at a time when several city administrations in the United States of America have issued unequivocal bans prohibiting the use of facial recognition technology by law enforcement.
A similar sentiment was more recently echoed by British Parliamentarians, calling for a moratorium on the use of automatic facial recognition, including any trials, until concerns about the technology had been fully addressed, and a legislative framework had been established.
But most importantly, this RFP comes at a time when the Delhi Police itself has stated that the accuracy of its facial recognition systems is a dismal 2%.
Facial recognition is unreliable and prone to discriminatory outcomes
The cautious approach towards adoption of facial recognition technology internationally is encouraging given that there is now a growing body of scholarship warning against its use. There is mounting evidence in other countries to show that facial recognition systems are less accurate in identifying ethnic minorities and women, leading to a higher possibility of misidentification—and therefore discrimination—against communities that are already more vulnerable.
It is telling that the NCRB’s RFP appears to have been issued without any public consultation, or even a feasibility study to ascertain its usefulness. The RFP requires that the proposed system be able to run photos, video grabs, and even sketches against the database of images, in order to help identify a person of interest.
Given how inaccurate even sophisticated facial recognition systems can be, attempts to match such images may increase the danger of discriminatory outcomes exponentially.
Legitimate aim but disproportionate means
It is no one’s case that the police should not exploit technological advancements to improve criminal investigation techniques. However, the unreliability of facial recognition technology is exacerbated when it is deployed in the absence of a legislative framework governing its limits.
If implemented, the proposed system would operate without any oversight on which images can be collected and stored by the NCRB, whether an individual is even aware that their image is a part of the database, and how long the NCRB is entitled to store it for.
The RFP further states that the proposed database will be populated using images from the passport database, the prisons database, images available with the Ministry of Women and Child Development, or ‘any other image database available with police’ or any other entity.
This implies that the database would not be limited to images of convicts, or even undertrials, but could potentially include images of every resident, giving the police access to personal information without having to establish any cause.
It is easy to imagine how this can become a tool for harassment for vulnerable groups, minorities or activists. To take an example, the Delhi Government’s ambitious CCTV programme contemplates the police having access to footage generated from cameras installed across the city.
If implemented, the automated facial recognition system would enable the police to use CCTV footage from a peaceful protest and potentially identify – or worse, wrongly identify – citizens attending such a protest. This can be used to create ‘watchlists’, inviting excessive scrutiny and harassment at places such as airports and public events.
Without a legal framework, those on such watchlists would have no knowledge about being on this list, let alone contest their inclusion on it. The proposed facial recognition system can therefore have a disproportionate impact on the freedom of association and expression.
In its landmark decision affirming the right to privacy in 2017, the Supreme Court unequivocally held that privacy extended to public spaces. It also imposed an obligation on the state to ensure that citizens are not subject to indiscriminate collection and exploitation of their personal information. Even if personal information is required by the state for a legitimate purpose (such as criminal investigations), the means employed must be proportionate to achieving such purpose.
The Personal Data Protection Bill, 2018, drafted by the Srikrishna Committee, otherwise exempts law enforcement from many of the obligations under the Bill. Even so, it mandates that personal information may be collected only pursuant to a law, and in a manner proportionate to its aims.
The routine collection of biometric personal information contemplated by the NCRB’s proposed facial recognition system makes it a tool for mass surveillance, rendering constitutional freedoms illusory.
For this reason, and because governments across the globe appear to be exercising caution regarding its use, the NCRB would do well to reconsider its RFP for setting up a facial recognition system.
Kritika Bhardwaj is an advocate and a Fellow with the Centre for Communication Governance at National Law University Delhi.