We Need to Ban Facial Recognition Altogether, Not Just Regulate Its Use

With automated electronic surveillance systems, suspicion does not precede data collection but is generated by the analysis of the data itself.

The Delhi police reportedly used automated facial recognition software (AFRS) to screen the crowd during Prime Minister Modi’s election rally in Delhi last December. This was also the first time Delhi police used facial images collected across protests in Delhi to identify protesters at the rally.

New categories of deviance, such as ‘habitual protesters’ and ‘rowdy elements’, have emerged as the faces of protesters are matched against existing databases and retained for future law enforcement. Police departments in a growing number of states also claim to be using facial recognition and predictive analytics to capture criminals. The Railways intends to use AFRS at stations to identify criminals, linking it to existing databases such as the Criminal Tracking Network.

Also read: Is Delhi Police’s Use of Facial Recognition to Screen Protesters ‘Lawful’?

The Telangana State Election Commission is considering using AFRS to identify voters during the municipal polls in the state. The home ministry recently announced its intention to install the world’s largest AFRS to track and nab criminals. AFRS adds to a growing list of surveillance systems already in place in India, such as NATGRID and the Central Monitoring System, even as there continues to be little publicly available information about these programmes. A recent study by Comparitech places India after China and Russia in terms of surveillance and the failure to provide privacy safeguards.

Automated facial recognition systems are a direct threat to the right to privacy. Unlike CCTV cameras, they allow for the automatic tracking and identification of individuals across place and time. Footage from surveillance cameras can be easily cross-matched, and combined with different databases, to yield a 360-degree view of individuals. As facial recognition systems combine constant bulk monitoring with individual identification, anonymity is further rendered impossible – there is no protection, or safety, even in numbers.

Unlike CCTV cameras, facial recognition systems allow for automatic tracking and identification of individuals across place and time.

But much more is at stake than individual privacy.

AFRS can have a chilling effect on society, making individuals refrain from engaging in certain types of activity for fear of the perceived consequences of the activity being observed. As Daragh Murray points out, this chilling effect results in the curtailment of a far greater set of rights, such as the freedom of expression, association, and assembly. Taken together, this can undermine the very foundations of a participatory democracy.

With AFRS and other automated electronic surveillance systems, suspicion does not precede data collection but is generated by the analysis of the data itself. To avoid suspicion, people will refrain from certain types of activity or expression; and the worry, or threat, of not knowing what data is being collected or how it is being combined and analysed can result in the self-censorship of a wide range of activities. Such surveillance, as Christian Fuchs points out, first creates a form of psychological and structural violence, which can then turn into physical violence.

Further, because surveillance operates as ‘a mechanism of social sorting’, classifying individuals based on a set of pre-determined characteristics and their likelihood of posing a risk to society, the chilling effect is likely to be experienced most severely by communities that already face discrimination. Such social sorting is also likely to exacerbate identity politics in India, entrenching and deepening social divisions.

This is also why critiques of AFRS that point to its low accuracy rates or its failure to identify certain skin tones miss the point entirely. A more accurate system would pose an even greater threat to privacy, enable more effective social sorting, and further undermine participatory democracy.

Much of the criticism around the deployment of AI-based technologies has highlighted issues of discrimination and exclusion; and how this can result in the violation of human rights. But the case of AFRS shows how AI systems can not only result in the violation or loss of rights, but are also productive of certain types of behaviour – creating a disciplinary society.

Further, because chilling effects in some sense rest on the occurrence of non-events – i.e. not engaging in particular types of activities – frameworks based on identifying discrete violations of rights are likely to be inadequate. The case of AFRS thus highlights how conversations around AI governance need to move beyond the identification of immediately visible harm at an individual level, to ask what kind of transformations are taking place at a structural level – how values of privacy, liberty, democracy and freedom are being recast.

Also read: India Is Falling Down the Facial Recognition Rabbit Hole

In India, as elsewhere, surveillance technologies have entered the public domain through a twin narrative of safety and protection on the one hand, and consumer convenience and personalisation on the other. The rhetoric of safety, for example, is behind the recent allocation of Rs 250 crore from the Nirbhaya fund for the installation of facial recognition cameras at 983 railway stations across the country.

The Delhi Police has registered 10 criminal cases against those involved in rioting and arson during the Anti-CAA Protests. Photo: PTI

Automated facial recognition software earlier procured to trace missing children in the country is now being used to sort and profile citizens, dissenters and peaceful protesters. This shows the folly of searching for the good use-cases of AI. AFRS, like other surveillance techniques, is also being routinised and normalised through the promise of consumer personalisation and convenience – whether the embrace of facial recognition to unlock an iPhone or people voluntarily signing up for AFRS at airports.

Mark Andrejevic has argued, for example, that the ‘key to the creation of digital enclosures today is the emphasis that has been given to the technologies of liberation, in particular, mobile phones and social networking sites.’ This ‘domestication of the discourse of interactivity’ has been crucial for expanding the means of surveillance. As a result, as Lyon notes, references to an Orwellian dystopia are ‘rendered inadequate because of the increasing importance of nonviolent and consumerist methods of surveillance.’

With various government ministries seeking to employ AFRS, many have called for regulating its use – specifying the conditions under which it may be deployed and establishing the necessary judicial processes. But regulating its use is not enough. Even if AFRS were permitted only in a few select instances, or only after due process has been followed, the chilling effect on democracy would remain.

At a more practical level, the effectiveness of AFRS requires the collection of biometric facial data from all individuals, not only the targets of surveillance or those suspected of criminal activity. Selective use also contributes to normalisation and routinisation (and, over time, even more effective AFRS). Let’s not forget that many surveillance technologies are first tested in the criminal justice system before they are deployed on the broader public.

Even with adequate legal safeguards and perfectly accurate facial recognition systems, the harms to society far outweigh any possible benefits. We need to ban the use of AFRS altogether – to establish this as a necessary red line to preserve the health and future of democracy. Even if effecting such political change seems a distant prospect in the current political climate, it is urgent to start building at least a normative consensus within civil society.

Also read: Delhi Police Is Now Using Facial Recognition Software to Screen ‘Habitual Protestors’

This conversation has already started in other corners of the world – San Francisco has banned the use of AFRS by the police and all municipal agencies, and the EU is considering banning the technology in public spaces for five years. Neither goes far enough – a better example could be Portland, Oregon, which is considering banning the use of AFRS by both government agencies and private businesses. While India continues to lack any framework for the governance and regulation of AI-based technologies, the case of AFRS highlights why this is an urgent priority.

AFRS will soon be complemented by systems for emotion and gait recognition; technologies that detect heartbeats and micro-biomes are also under development. We need to act now: as these technologies become more embedded in not only governance systems but also consumer habits, there will be fewer opportunities for course correction.

Urvashi Aneja is co-founder and director of Tandem Research and tweets at @urvashi_aneja. Angelina Chamuah is a research fellow at Tandem Research.

Delhi Police Videographers Failed to Film Violent Protesters at New Friends Colony

The Delhi Police may now end up with little evidence to show that the protesters initiated violence or torched buses.

New Delhi: As protests over the Citizenship (Amendment) Act (CAA) spilled over to the streets last month, the Delhi Police decided to film the demonstrations so that it could identify “rabble rousers and miscreants” through an Automated Facial Recognition System (AFRS), a software it acquired in March 2018.

However, a month into such protests, investigations have revealed that the videographers may have failed to capture the faces of those protesters “turning violent and indulging in stone pelting”.

According to the Indian Express, a Delhi Police FIR claimed that videographers deployed to film the anti-CAA demonstration at the New Friends Colony on December 15 failed to capture the faces of those protestors who turned violent.

On December 15, students of Jamia Millia Islamia and residents of the locality attempted to march to parliament. When they were stopped by the police at Mathura Road, a section of protesters started pelting stones at the forces. They were also said to have torched a few buses and private vehicles, prompting the police to lathi-charge the crowd and storm the campus. The police attacked students in the library and even entered hostels to nab students.

Since then, criticism of the police has only grown. Democratic groups have said that the heavy-handed use of force against students was disproportionate. However, the Delhi Police has been justifying its use of force on students on the basis of the violence that broke out at the New Friends Colony.

However, with videographers having failed to identify “miscreants”, the Delhi Police may now end up with little evidence to show that the protesters initiated violence or torched buses.

All Jamia violence cases were handed over to the special investigation team (SIT) last week. “The SIT was surprised to learn that there was not a single video or photo showing people pelting stones or setting buses on fire. There are some videos and photographs taken from people’s phones and police are now scanning footage of CCTV cameras in the area. They are also examining raw footage from TV channels,” police sources told the Indian Express.

“While scanning videos recorded by hired videographers, the SIT found that they have footage of protesters climbing police barricades. Videos also show injured police personnel and buses and private vehicles on fire, but not the culprits,” sources said.

The SIT, meanwhile, arrested three more persons on charges of damaging public property and rioting. “Mohammed Rana (35), Mohammed Haroon (36) and Mohammed Hamid (38) are residents of Taimoor Nagar and were arrested in separate cases,” a senior officer told the daily.

These arrests were made with the help of local police constables, who helped the SIT identify them through mobile phone videos. The police have so far arrested 16 persons in connection with the violence at Jamia Nagar and New Friends Colony, on charges of rioting, damage to public property and assaulting police personnel.

“None of the men are students of the university,” an official said.

Delhi Police Is Now Using Facial Recognition Software to Screen ‘Habitual Protestors’

Narendra Modi’s Ramlila Maidan event on December 22 was the first political event where the Automated Facial Recognition System was used to screen the crowd.

New Delhi: The Delhi police is now using Automated Facial Recognition System (AFRS), a software it acquired in March 2018, to screen alleged “rabble-rousers and miscreants”. This includes those who have been protesting against the Citizenship (Amendment) Act (CAA) and National Register of Citizens (NRC).

According to a report in the Indian Express, Narendra Modi’s Ramlila Maidan event on December 22 was the first political event where this software was used to screen the crowd. This was also reportedly the first time the Delhi police used a set of facial images, collected by filming protests at various spots in the capital over the years, to identify “law and order suspects”. Before this event, the software had only been used thrice – twice at Independence Day parades and once at a Republic Day parade.

“This dataset of ‘select protesters’, said sources, was put to use for the first time to keep ‘miscreants who could raise slogans or banners’ out of the Prime Minister’s rally last Sunday,” the newspaper report says.

All the people who attended the Ramlila Maidan event had to pass through a metal detector gate, where a camera sent a live feed of their faces to a control room set up at the spot; the live feed was matched against the facial dataset within five seconds.

The Delhi police had acquired AFRS following a Delhi high court order in a case related to missing children. It was supposed to be used to identify lost and found boys and girls by matching photos.

This comes at a time when there have been protests across the country, some of which have turned violent, resulting in clashes with the police. At least 25 people have been killed in the police crackdown so far. The police in Delhi, UP, Karnataka and Bihar have been accused of using disproportionate force against protestors. There have also been allegations of brutality and human rights violations against the police and state authorities. Prominent elected representatives, including PM Modi and UP CM Yogi Adityanath, have blamed the violence on protestors.

There is palpable fear among activists and civil society members that this technology can be wrongfully used to profile dissenters and members of a particular community. Responding to such fears, a spokesperson of Delhi police told the Indian Express that, “such datasets are not perpetual and are revised periodically. Racial or religious profiling is never a relevant parameter while building these datasets”.

All the footage collected by Delhi police during protest demonstrations is now being fed to the AFRS, which extracts “identifiable faces” of the protesters to its dataset. An unnamed source told the Indian Express that after extraction, images are manually screened to identify and retain “habitual protesters” and “rowdy elements”.

India Is Falling Down the Facial Recognition Rabbit Hole

Its use as an effective law enforcement tool is overstated, while the underlying technology is deeply flawed.

In a discomfiting reminder of how far technology can be used to intrude on the lives of individuals in the name of security, the Ministry of Home Affairs, through the National Crime Records Bureau, recently put out a tender for a new Automated Facial Recognition System (AFRS). 

The stated objective of this system is to “act as a foundation for a national level searchable platform of facial images”, and to “[improve] outcomes in the area of criminal identification and verification by facilitating easy recording, analysis, retrieval and sharing of Information between different organizations.”

The system will pull facial image data from CCTV feeds and compare these images with existing records in a number of databases, including (but not limited to) the Crime and Criminal Tracking Networks and Systems (or CCTNS), Interoperable Criminal Justice System (or ICJS), Immigration Visa Foreigner Registration Tracking (or IVFRT), Passport, Prisons, Ministry of Women and Child Development (KhoyaPaya), and state police records. 

Furthermore, this system of facial recognition will be integrated with the yet-to-be-deployed National Automated Fingerprint Identification System (NAFIS) as well as other biometric databases to create what is effectively a multi-faceted system of biometric surveillance.

It is rather unfortunate, then, that the government has called for bids on the AFRS tender without any form of utilitarian calculus that might justify its existence. The tender simply states that this system would be “a great investigation enhancer.” 

Also read: Humans Can’t Watch All the Surveillance Cameras Out There, so Computers Are

This confidence is misplaced at best. There is significant evidence that not only is a facial recognition system, as has been proposed, ineffective in its application as a crime-fighting tool, but it is a significant threat to the privacy rights and dignity of citizens. Notwithstanding the question of whether such a system would ultimately pass the test of constitutionality – on the grounds that it affects various freedoms and rights guaranteed within the constitution – there are a number of faults in the issued tender. 

Let us first consider the mechanics of a facial recognition system itself. Facial recognition systems chain together a number of algorithms to identify and pick out specific, distinctive details about a person’s face – such as the distance between the eyes, or shape of the chin, along with distinguishable ‘facial landmarks’. These details are then converted into a mathematical representation known as a face template for comparison with similar data on other faces collected in a face recognition database. There are, however, several problems with facial recognition technology that employs such methods. 
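
To make the matching step concrete, here is a minimal illustrative sketch in Python. It is not a description of the NCRB’s actual system: the embed_face() function is a hypothetical stand-in for the trained neural network, and the gallery, similarity measure and threshold are assumptions chosen for illustration.

```python
# Minimal sketch of face-template matching (illustrative assumptions only).
import numpy as np

def embed_face(face_image) -> np.ndarray:
    """Placeholder: a trained neural network would map a cropped face image
    to a fixed-length numeric vector (the 'face template')."""
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_gallery(probe_image, gallery: dict, threshold: float = 0.6):
    """Compare one face against a database of stored templates.

    gallery maps an identity label to its stored template vector.
    Returns (identity, score) if the best match clears the threshold,
    otherwise (None, score).
    """
    probe = embed_face(probe_image)
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```

Everything hinges on the threshold: loosen it and the system produces more “matches”, most of them wrong; tighten it and it misses the very people it is meant to find.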

Facial recognition technology depends on machine learning – the tender itself mentions that the AFRS is expected to work on neural networks “or similar technology” – which is far from perfect. At a relatively trivial level, there are several ways to fool facial recognition systems, including wearing eyewear or specific types of makeup. The training sets for the algorithm can themselves be deliberately poisoned so that the system recognises objects incorrectly, as observed by students at MIT.

More consequentially, these systems often throw up false positives, such as when the face recognition system incorrectly matches a person’s face (say, from CCTV footage) to an image in a database (say, a mugshot), which might result in innocent citizens being identified as criminals. In a real-time experiment set in a train station in Mainz, Germany, facial recognition accuracy ranged from 17-29% – and that too only for faces seen from the front – and was at 60% during the day but 10-20% at night, indicating that environmental conditions play a significant role in this technology.

Also read: Could Super Recognisers Be the Latest Weapon in the War on Terror?

Facial recognition software used by the UK’s Metropolitan Police has returned false positives in more than 98% of match alerts generated.

When the American Civil Liberties Union (ACLU) used Amazon’s face recognition system, Rekognition, to compare images of legislative members of the American Congress with a database of mugshots, the results included 28 incorrect matches.
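
A simple back-of-the-envelope calculation shows how such figures can arise even from a system that looks accurate on paper: when the people actually being searched for are rare in a large crowd, even a small per-face error rate swamps the genuine matches. The numbers below are assumptions chosen for illustration, not figures from either deployment.

```python
# Illustrative base-rate calculation (all numbers are assumed).
crowd_size = 100_000          # faces scanned at an event
watchlist_present = 20        # watchlisted people actually in the crowd
true_positive_rate = 0.80     # chance a watchlisted face is correctly flagged
false_positive_rate = 0.01    # chance an innocent face is wrongly flagged

true_alerts = watchlist_present * true_positive_rate
false_alerts = (crowd_size - watchlist_present) * false_positive_rate

share_false = false_alerts / (true_alerts + false_alerts)
print(f"Alerts: {true_alerts + false_alerts:.0f}, "
      f"of which {share_false:.1%} are false matches")
# -> roughly 1016 alerts, about 98% of them false
```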

There is another uncomfortable reason for these inaccuracies – facial recognition systems often reflect the biases of the society they are deployed in, leading to problematic face-matching results. Technological objectivity is largely a myth, and facial recognition offers a stark example of this. 

An MIT study shows that existing facial recognition technology routinely misidentifies people of darker skin tone, women and young people at high rates, performing better on male faces than female faces (8.1% to 20.6% difference in error rate), lighter faces than darker faces (11.8% to 19.2% difference in error rate) and worst on darker female faces (20.8% to 34.7% error rate). In the aforementioned ACLU study, the false matches were disproportionately people of colour, particularly African-Americans. The bias rears its head when the parameters of machine-learning algorithms, derived from labelled data during a “supervised learning” phase, adhere to socially-prejudiced ideas of who might commit crimes. 

The implications for facial recognition are chilling. In an era of pervasive cameras and big data, such prejudice can be applied at unprecedented scale through facial recognition systems. By replacing biased human judgment with a machine-learning technique that embeds the same bias, only more consistently, we defeat any claim of technological neutrality. Worse, because humans will assume that the machine’s “judgment” is not only consistently fair on average but also independent of their personal biases, they will read any agreement between its conclusions and their own intuition as independent corroboration.

In the Indian context, consider that Muslims, Dalits, Adivasis and other SC/STs are disproportionately targeted by law enforcement. The NCRB, in its 2015 report on prison statistics in India, recorded that over 55% of undertrial prisoners in India are either Dalits, Adivasis or Muslims – grossly disproportionate to the combined share of these communities in the population, which amounts to just 39% according to the 2011 Census.

If the AFRS is thus trained on these records, it would clearly reinforce socially-held prejudices against these communities, as inaccurately representative as they may be of those who actually carry out crimes. The tender gives no indication that the developed system would need to eliminate or even minimise these biases, nor if the results of the system would be human-verifiable.

This could lead to a runaway effect if subsequent versions of the machine-learning algorithm are trained with criminal convictions in which the algorithm itself played a causal role. Taking such a feedback loop to its logical conclusion, law enforcement may use machine learning to allocate police resources to likely crime spots – which would often be in low income or otherwise vulnerable communities.
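
The dynamic can be sketched with a toy calculation (all numbers are assumed for illustration, not drawn from any real deployment): two areas have identical underlying offence rates, but offences are only recorded where police are present, and the next round of deployment follows the recorded counts.

```python
# Toy sketch of a predictive-policing feedback loop (assumed numbers only).
true_offences = [100, 100]     # identical underlying rates in both areas
patrol_share = [0.8, 0.2]      # historically biased starting allocation

for period in range(5):
    # Offences are recorded roughly in proportion to police presence.
    recorded = [true_offences[i] * patrol_share[i] for i in range(2)]
    total = sum(recorded)
    # The next allocation is driven by this period's recorded crime.
    patrol_share = [recorded[i] / total for i in range(2)]
    print(f"period {period}: recorded={recorded}, next allocation={patrol_share}")

# Recorded crime stays at [80.0, 20.0] and the allocation at [0.8, 0.2]:
# the data "confirms" the bias the system started with, indefinitely.
```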

Adam Greenfield, writing in Radical Technologies, discusses the idea of ‘over transparency’, which combines the “bias” of the system’s designers, as well as of the training sets – based as these systems are on machine learning – and the “legibility” of the data from which patterns may be extracted. The “meaningful question,” then, isn’t limited to whether facial recognition technology works in identification – “[i]t’s whether someone believes that they do, and acts on that belief.”

The question thus arises as to why the MHA/NCRB believes this is an effective tool for law enforcement. We’re led, then, to another, larger concern with the AFRS – that it deploys a system of surveillance that oversteps its mandate of law enforcement. The AFRS ostensibly circumvents the fundamental right to privacy, affirmed by the Supreme Court in 2017, by sourcing its facial images from CCTV cameras installed in public locations, where the citizen may expect to be observed.

Also read: Old Isn’t Always Gold: FaceApp and Its Privacy Policies

The extent of this surveillance becomes even clearer when one observes that the range of databases mentioned in the tender for matching suspects’ faces extends to “any other image database available with police/other entity”, besides the previously mentioned CCTNS, ICJS et al. The open-endedness of this list makes overreach extremely viable.

This is compounded when we note that the tender expects the system to “[m]atch suspected criminal face[sic] from pre-recorded video feeds obtained from CCTVs deployed in various critical identified locations, or with the video feeds received from private or other public organization’s video feeds.” A further concern arises with regard to the process of identifying such “critical […] locations”, and whether there would be any mechanisms in place to prevent this from becoming an unrestrained system of surveillance, particularly given the stated access to private organisations’ feeds.

The Perpetual Lineup report by Georgetown Law’s Center on Privacy & Technology identifies real-time (and historic) video surveillance as posing a very high risk to privacy, civil liberties and civil rights, especially owing to the high-risk factors of the system using real-time dragnet searches that are more or less invisible to the subjects of surveillance.

It is also designated a “Novel Use” system of criminal identification, i.e. one with little to no precedent, as compared to fingerprint or DNA analysis – the latter of which was responsible for countless wrongful convictions during its early application in forensic identification, many of which have since been overturned.

In the Handbook of Face Recognition, Andrew W. Senior and Sharathchandra Pankanti identify a more serious threat that may be born out of automated facial recognition, assessing that “these systems also have the potential […] to make judgments about [subjects’] actions and behaviours, as well as aggregating this data across days, or even lifetimes,”  making video surveillance “an efficient, automated system that observes everything in front of any of its cameras, and allows all that data to be reviewed instantly, and mined in new ways” that allow constant tracking of subjects.

Such “blanket, omnivident surveillance networks” are a serious possibility with the proposed AFRS. Ye et al, in their paper on “Anonymous biometric access control”, show how automatically captured location and facial image data, obtained from cameras designed to track them, can be used to learn graphs of the social networks within groups of people.
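
The general idea can be sketched as follows. This is an illustrative reconstruction of co-occurrence-based inference under assumed parameters, not the specific method of Ye et al; the function name, time window and thresholds are hypothetical.

```python
# Sketch: inferring a social graph from face sightings (illustrative only).
from collections import defaultdict

def infer_cooccurrence_graph(sightings, window_seconds=300, min_cooccurrences=3):
    """sightings: list of (person_id, location_id, unix_timestamp) tuples,
    as produced by cameras that identify faces and log where/when they appear.
    Returns pairs of people repeatedly seen at the same place around the same time."""
    by_location = defaultdict(list)
    for person, location, ts in sightings:
        by_location[location].append((ts, person))

    pair_counts = defaultdict(int)
    for events in by_location.values():
        events.sort()                          # order each camera's log by time
        for i, (ts_i, person_i) in enumerate(events):
            for ts_j, person_j in events[i + 1:]:
                if ts_j - ts_i > window_seconds:
                    break                      # later sightings are even further apart
                if person_i != person_j:
                    pair_counts[frozenset((person_i, person_j))] += 1

    # Keep only pairs seen together often enough to suggest a real tie.
    return {pair: n for pair, n in pair_counts.items() if n >= min_cooccurrences}
```

Combined with the identity databases listed in the tender, such a graph would map not just where individuals go but whom they associate with.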

Consider those charged with sedition or similar crimes, given that the CCTNS records the details as noted in FIRs across the country. Through correlating the facial image data obtained from CCTVs across the country – the tender itself indicates that the system must be able to match faces obtained from two (or more) CCTVs – this system could easily be used to target the movements of dissidents moving across locations.

Constantly watched

Further, something which has not been touched upon in the tender – and which may ultimately allow for a broader set of images for carrying out facial recognition – is the definition of what exactly constitutes a ‘criminal’. Is it when an FIR is registered against an individual, or when s/he is arrested and a chargesheet is filed? Or is it only when an individual is convicted by a court that they are considered a criminal?

Additionally, does a person cease to be recognised by the tag of a criminal once s/he has served their prison sentence and paid their dues to society? Or are they instead marked as higher-risk individuals who may potentially commit crimes again? It could be argued that such a definition is not warranted in a tender document; however, these are legitimate questions which should be answered prior to commissioning and building a criminal facial recognition system.

Senior and Pankanti note the generalised metaphysical consequences of pervasive video surveillance in the Handbook of Face Recognition: 

“the feeling of disquiet remains [even if one hasn’t committed a major crime], perhaps because everyone has done something “wrong”, whether in the personal or legal sense (speeding, parking, jaywalking…) and few people wish to live in a society where all its laws are enforced absolutely rigidly, never mind arbitrarily, and there is always the possibility that a government to which we give such powers may begin to move towards authoritarianism and apply them towards ends that we do not endorse.”

Such a seemingly apocalyptic scenario isn’t far-fetched. In the section on ‘Mandatory Features of the AFRS’, the system goes a step further and is expected to integrate “with other biometric solution[sic] deployed at police department system like Automatic Fingerprint identification system (AFIS)[sic]” and “Iris.” This form of linking of biometric databases opens up possibilities of a dangerous extent of profiling.

While the Aadhaar Act, 2016, disallows Aadhaar data from being handed over to law enforcement agencies, the AFRS and its linking with biometric systems (such as the NAFIS) effectively bypasses the minimal protections from biometric surveillance the prior unavailability of Aadhaar databases might have afforded. The fact that India does not have a data protection law yet – and the Bill makes no references to protection against surveillance either – deepens the concern with the usage of these integrated databases. 

The Perpetual Lineup report warns that the government could use biometric technology “to identify multiple people in a continuous, ongoing manner [..] from afar, in public spaces,” allowing identification “to be done in secret”. Senior and Pankanti warn of “function creep,” where the public grows uneasy as “silos of information, collected for an authorized process […] start being used for purposes not originally intended, especially when several such databases are linked together to enable searches across multiple domains.”

This, as Adam Greenfield points out, could very well erode “the effectiveness of something that has historically furnished an effective brake on power: the permanent possibility that an enraged populace might take to the streets in pursuit of justice.”

What the NCRB’s AFRS amounts to, then, is a system of public surveillance that offers little demonstrable advantage to crime-fighting, especially as compared with its costs to fundamental human rights of privacy and the freedom of assembly and association. This, without even delving into its implications with regard to procedural law. To press on with this system, then, would be indicative of the government’s lackadaisical attitude towards protecting citizens’ freedoms. 

Karan Saini is a security researcher and programme officer with the Centre for Internet and Society. Prem Sylvester is a research intern at the Centre for Internet and Society. The views expressed by the authors in this article are personal.