‘Pandora’s Box of Privacy Issues’: Experts on Delhi Govt Schools’ Use of Facial Recognition Tech

An RTI query by the Internet Freedom Foundation found that 12 Delhi government schools were in the process of installing facial recognition technology.

New Delhi: After initiating a project to install closed circuit television (CCTV) cameras in classrooms and schools, the Delhi government is now pushing ahead with the surveillance programme by implementing the use of facial recognition technology (FRT) in its schools.

The move comes at a time when cyber experts and lawyers have warned against exposing citizens, and especially children, to such monitoring, which can be misused by cybercriminals and hackers.

In response to a Right to Information (RTI) query filed with the Directorate of Education in December 2020 by Anushka Jain, an associate counsel (Transparency and Right to Information) at the Internet Freedom Foundation, a digital liberties organisation, at least 12 Delhi government schools confirmed that they had installed, or were in the process of installing, facial recognition technology.

RTI asked about SOPs, regulations for use of FRT and CCTVs

Speaking to The Wire, Jain said the project was launched by the Aam Aadmi Party government in 2019 to install CCTVs in all Delhi government schools. “We asked the Education Department of the Delhi government at the end of 2020 if they were using facial recognition and they said yes. Most of the schools have also replied that it is in the process of happening or they are getting the information from the Education Department,” Jain said.

On privacy concerns and the safety of children, Jain said that since it was an RTI application, it was not possible to send open-ended queries, and so specific questions were asked. “We asked them whether any standard operating procedure or regulations were in place, and about the duration and purpose of the installation of CCTVs,” Jain said.

Also read: Policing or Protection: Parents Ponder as SC Refuses to Stay Delhi Govt’s School CCTV Project

Twelve schools confirmed use of FRT

The Internet Freedom Foundation filed the RTI with the education department, which forwarded the query to all the schools under the Delhi government. Around 150 schools have responded.

“Out of this, 12 said yes the facial recognition system has been installed or it was in process,” Jain said.

Now, Jain has filed a follow-up RTI to get more direct answers from the Delhi government. In the new RTI, filed earlier this month with the education department, the Delhi government has been asked whether there is an SOP for the collection and processing of information gathered through the CCTV cameras; the duration for which the information is stored; the purpose for which it is collected, processed and stored; and the kind of software and hardware being used for the CCTVs.

‘Follow-up query sent to seek more details’

The group has also asked for details of the total number of CCTV cameras that have been installed and the total expenditure incurred. It has also asked which legislation or rule authorised the Delhi government to use facial recognition technology on students in schools and whether any legal opinion was sought by the government prior to procuring the technology.

The questions also cover issues pertaining to a cost-benefit analysis, feasibility study and privacy impact assessment; whether any guideline, policy or rule is in place to govern the use of facial recognition; and from which companies the CCTVs were obtained.

Students wearing protective face masks are seen inside a classroom of a government-run school after authorities ordered schools to reopen voluntarily for classes 9 to 12, in Gurugram, India October 15, 2020. Photo: Reuters/Anushree Fadnavis

Speaking about Project Panoptic of IFF, which aims to bring transparency and accountability to the relevant government stakeholders involved in the deployment and implementation of facial recognition technology (FRT) projects in India, Jain said the organisation has been following facial recognition projects by the governments, at the Centre and at the state levels. “On our website, panoptic.in, there is a map of India and you can see the number of such projects in different states. We are currently tracking around 40 projects all over the country, both central and state.”

Also read: Delhi Govt Flouting Rules by Giving Out Personal Details of RTI Applicants

Various states using FRT for police surveillance, voter verification

Jain said the project has also obtained information on how different police departments across the states are using facial recognition. The project also has information on how the Telangana State Election Commission is using facial recognition for voter verification.

As to what prompted the group to seek information regarding the Delhi government’s project, Jain said, “We were worried about what the Delhi Government was doing because usually these systems are used for authentication of identity – such as for voter verification – and for security and surveillance purposes, like the police makes use of them.”

‘Use of FRT with CCTVs in schools is totally unimaginable’

Stating that the use of this technology in schools, especially through CCTVs installed in classrooms, does not make sense because it is extremely excessive, Jain said, “to use this technology on children and invade their privacy is something which is obviously not proportional to any issue that they think they are going to solve through it.”

The IFF is also trying to understand why the Delhi Government undertook the project. “Even CCTVs are an invasion of privacy but some parents were okay as they wanted to see what their children were doing, but why facial recognition technology is being used in conjunction with CCTVs in schools is totally unimaginable,” said Jain.

‘Use of FRT fraught with dangers’

Cyber law expert and senior advocate, Pawan Duggal, said the move to use facial recognition technology in schools was fraught with many dangers. “Putting up CCTV cameras in schools opens up a Pandora’s Box of legal issues, specifically privacy issues. These issues have to be appropriately addressed given the fact that we all have a fundamental right to privacy, which derives from the judgment of Justice K.S. Puttaswamy versus Union of India.”

Also read: Kejriwal’s Move to Install CCTV in Classrooms Raises Concerns About Impact of Surveillance

Pointing out that schools are public places and people still have an expectation of privacy there, he said, “so if you are going to put CCTV cameras in schools, this could be hampering children’s fundamental right to privacy. And this could also expose the concerned institutions to wrong exposure because the CCTV camera in school is not a necessity.”

‘No law passed by parliament authorises CCTV use in schools’

Duggal questioned if the move to use FRT and CCTVs had any legal sanction. “The right to privacy is linked to the fundamental right to life under Article 21 and it can only be deprived in accordance with procedure established by law. This means that if the parliament has passed a law which authorised such CCTV cameras in schools then you can deprive me of the right to privacy through that law and not otherwise,” he elaborated.

“As of now,” Duggal said, “there is no law passed by parliament which mandates the installation of CCTV cameras in schools. In the absence of any such law, the entire experimental exercise is bound to be fraught with a lot of legal challenges. Any student or any parent can go and challenge it. The parents can go in for writ petitions in the courts and more significantly it could also involve other things – because you are now capturing children’s pictures under circumstances when they have an expectation of privacy.”

A facial recognition system at work. Photo: Reuters/Bobby Yip

‘Putting CCTVs in schools is unauthorised monitoring’

Duggal said when students come to school, they come to study, they don’t want to be monitored. “So if you are putting your CCTV cameras in school it would tantamount to unauthorised monitoring. This opens up a much bigger debate. This is a gross violation of children’s rights. I think governments must go slow in this regard till the time they pass a law in parliament.”

‘Surveillance can become a tool of misuse by governments’

The cyber law expert cautioned that data being obtained through FRT and CCTVs was liable to be misused in the absence of proper preservation. “The biggest fears in such cases are the subject matter under surveillance are minors. They don’t have the capacity to determine how they are being monitored. Such surveillance can also become a big tool of misuse in the hands of the relevant governments,” he said.

Also read: Why Delhi School Teachers Are Wary About Increasing Surveillance and Intimidation

He added that “if you collect children’s data, this is sensitive data, this is not normal data, and if it is not being properly handled or dealt with the chances of this entire data being misused or hacked can be very much present.”

‘Data on children commands a premium on dark web’

Duggal also warned that the data on children could end up being used in many ways. “A lot of things can happen. Children’s pictures can be picked up. We do not know how the data is being preserved. The cybersecurity ramifications are huge. Already children’s data is sold for premium in the market, on the dark web. So if you are doing a CCTV camera and if the footage is left at a place where there is no adequate cybersecurity, it can be hacked by cybercriminals and hackers.”

Finally, he said, these are issues that have a direct bearing on the enjoyment of personal privacy and data privacy. “So to merely say that the system is being used to monitor or match images does not really help. It has no legal sanctity in the eyes of law.”

The Wire has sent a questionnaire to the Principal Secretary (Education), H. Rajesh Prasad, on secyedu@nic.in and Director, Directorate of Education, Udit Prakash Rai, at diredu@nic.in, seeking their response on the use of the facial recognition technology, its necessity, enabling legislation and rules, and whether it was the AAP government or the lieutenant governor’s office which sought its use. The story will be updated as and when their response is received.

We Need to Ban Facial Recognition Altogether, Not Just Regulate Its Use

With automated electronic surveillance systems, suspicion does not precede data collection but is generated by the analysis of the data itself.

The Delhi police reportedly used automated facial recognition software (AFRS) to screen the crowd during Prime Minister Modi’s election rally in Delhi last December. This was also the first time Delhi police used facial images collected across protests in Delhi to identify protesters at the rally.

New categories of deviance such as ‘habitual protesters’ and ‘rowdy elements’ have emerged as faces of protesters are matched against existing databases and maintained for future law enforcement. Police departments in a growing number of states also claim to be using facial recognition and predictive analytics to capture criminals. The Railways intends to use AFRS at stations to identify criminals, linking the AFRS systems to existing databases such as the Criminal Tracking Network.

Also read: Is Delhi Police’s Use of Facial Recognition to Screen Protesters ‘Lawful’?

The Telangana State Election Commission is considering using AFRS to identify voters during the municipal polls in Telangana. The home ministry recently announced its intention to install the world’s largest AFRS to track and nab criminals. AFRS adds to a growing list of surveillance systems already in place in India, such as NATGRID and the Central Monitoring System, even while there continues to be little publicly available information about these programmes. A recent study by Comparitech places India after China and Russia in terms of surveillance and failure to provide privacy safeguards.

Automated facial recognition systems are a direct threat to the right to privacy. Unlike CCTV cameras, they allow for the automatic tracking and identification of individuals across place and time. Footage from surveillance cameras can be easily cross-matched, and combined with different databases, to yield a 360-degree view of individuals. As facial recognition systems combine constant bulk monitoring with individual identification, anonymity is further rendered impossible – there is no protection, or safety, even in numbers.

Unlike CCTV cameras, facial recognition systems allow for automatic tracking and identification of individuals across place and time.

But much more is at stake than individual privacy.

AFRS can have a chilling effect on society, making individuals refrain from engaging in certain types of activity for fear of the perceived consequences of the activity being observed. As Daragh Murray points out, this chilling effect results in the curtailment of a far greater set of rights, such as the freedom of expression, association, and assembly. Taken together, this can undermine the very foundations of a participatory democracy.

With AFRS and other automated electronic surveillance systems, suspicion does not precede data collection, but is generated by the analysis of the data itself. To avoid suspicion, people will refrain from certain types of activity or expression; and the worry, or threat, of not knowing what data is being collected or how it is being combined and analysed can result in the self-censorship of a wide range of activities. Such surveillance, as Christian Fuchs points out, first creates a form of psychological and structural violence, which can then turn into physical violence.

Further, because surveillance operates as ‘a mechanism of social sorting’, classifying individuals based on a set of pre-determined characteristics and their likelihood of posing a risk to society, the chilling effect is likely to be experienced more severely by communities that are already discriminated against. Such social sorting is also likely to exacerbate identity politics in India, enforcing and deepening social divisions.

This is also why critiques of AFRS that point to their low accuracy rates or failure to identify certain skin tones miss the point entirely. A more accurate system would pose an even greater threat to privacy and participatory democracy, and would be an even more powerful instrument of social sorting.

Much of the criticism around the deployment of AI-based technologies has highlighted issues of discrimination and exclusion; and how this can result in the violation of human rights. But the case of AFRS shows how AI systems can not only result in the violation or loss of rights, but are also productive of certain types of behaviour – creating a disciplinary society.

Further, because chilling effects in some sense rest on the occurrence of non-events – i.e. not engaging in particular types of activities – frameworks based on identifying discrete violations of rights are likely to be inadequate. The case of AFRS thus highlights how conversations around AI governance need to move beyond the identification of immediately visible harm, at an individual level, to ask what kind of transformations are taking place at a structural level – how values of privacy, liberty, democracy and freedom are being recast.

Also read: India Is Falling Down the Facial Recognition Rabbit Hole

In India, as elsewhere, surveillance technologies have entered the public domain through a narrative of safety and protection on the one hand, and consumer convenience and personalisation on the other. The rhetoric of safety, for example, is behind the recent allocation of Rs 250 crore from the Nirbhaya fund for the installation of facial recognition cameras at 983 railway stations across the country.

The Delhi Police has registered 10 criminal cases against those involved in rioting and arson during the anti-CAA protests. Photo: PTI

Automated facial recognition software earlier procured to trace missing children in the country is now being used to sort and profile citizens, dissenters and peaceful protesters. This shows the folly of searching for the good use-cases of AI. AFRS, like other surveillance techniques, is also being routinised and normalised through the promise of consumer personalisation and convenience – whether it is the embrace of facial recognition to unlock an iPhone or people voluntarily signing up for AFRS at airports.

Mark Andrejevic has argued, for example, that the ‘key to the creation of digital enclosures today is the emphasis that has been given to the technologies of liberation, in particular, mobile phones and social networking sites.’ This ‘domestication of the discourse of interactivity’ has been crucial for expanding the means of surveillance. As a result, as Lyon notes, references to an Orwellian dystopia are ‘rendered inadequate because of the increasing importance of nonviolent and consumerist methods of surveillance.’

With the various government ministries seeking to employ AFRS, many have called for regulating the use of AFRS – that the conditions of its use must be specified as well as the necessary judicial processes established. But, regulating its use is not enough. Even if AFRS were permitted in only a few select instances, or after due process has been followed, the chilling effect on democracy will remain.

At a more practical level, the effectiveness of AFRS requires the collection of biometric facial data from all individuals, not only the targets of surveillance or those suspected of criminal activity. Selective use also contributes to normalisation and routinisation (and over time, even more effective AFRS). Let’s not forget that many surveillance technologies are first tested in the criminal justice system before they are deployed for the broader public.

Even with adequate legal safeguards, and perfectly accurate facial recognition systems, the harms to society far outweigh any possible benefits. We need to ban the use of AFRS altogether – to establish this as a necessary red line to preserve the health and future of democracy. Even while effecting such political change may seem a remote prospect in the current political climate, it is urgent to start building at least a normative consensus within civil society.

Also read: Delhi Police Is Now Using Facial Recognition Software to Screen ‘Habitual Protestors’

This conversation has already started in other corners of the world – San Francisco has already banned the use of AFRS by the police and all municipal agencies, and the EU is considering banning the technology in public spaces for five years. Neither goes far enough – an even better example could be Portland, Oregon, which is considering banning the use of AFRS by both government agencies and private businesses. While India continues to lack any framework for the governance and regulation of AI-based technologies, the case of AFRS highlights how this is an urgent priority.

AFRS will soon be complemented by systems for emotion and gait recognition; technologies that detect heartbeat and micro-biomes are also under development. We need to act now: as these technologies become more embedded in not only governance systems but also consumer habits, there will be fewer opportunities for course correction.

Urvashi Aneja is co-founder and director of Tandem Research and tweets at @urvashi_aneja. Angelina Chamuah is a research fellow at Tandem Research.

Is Delhi Police’s Use of Facial Recognition to Screen Protesters ‘Lawful’? 

The eventual legal answer may lie in whether there is a clear distinction between physical and virtual intrusion of privacy.

Protests have erupted across the country over the past two months, and with them, new methods to crack down on dissent.

The Delhi police, for instance, recently used Automated Facial Recognition Technology (AFRS) to screen crowds at political rallies against the Citizenship (Amendment) Act (CAA). The city’s law enforcement then compared this data with a pre-existing database of more than two lakh protestors it reportedly perceives as ‘antisocial elements’. 

The use of AFRS has met with considerable ire from experts, who argue that since it was not introduced through legislation passed by parliament, its use is unlawful.

This criticism is based on the observations of the Supreme Court in the Puttaswamy case, which acknowledged that privacy is a fundamental right. 

The plurality opinion, delivered by Justice D.Y. Chandrachud, laid down a three-fold test to impose reasonable restrictions on individual privacy.

Also Read: Delhi Police Is Now Using Facial Recognition Software to Screen ‘Habitual Protestors’

First, there must exist a law imposing the restriction (on privacy). Second, there must be a legitimate state aim that the law seeks to pursue. Third, there must be a rational nexus between the intended aim and the means (i.e. nature of restriction) adopted to achieve it.

However, from the judgment, it is not clear whether by using the word ‘law’ the judges meant only statutory law or even common law (derived from custom and judicial decisions common to courts across England and Commonwealth states).  

If the interpretation of ‘law’ in the (Puttaswamy) three-pronged test includes non-statutory common law, an express statutory provision enabling surveillance through AFRS is not necessary and the Delhi Police could use AFRS in the absence of any parliamentary law enabling such use. Establishing that surveillance through AFRS is an exercise of powers derived from common law would be sufficient. Discerning the correct interpretation of ‘law’ is relevant since there is no express statutory provision enabling AFRS. 

Supreme Court. Photo: PTI

A statutory provision which may be somewhere close to enabling the use of AFRS is section 31 of the Police Act, 1861, which creates a duty for the police to keep order on public roads, streets and resorts and to prevent any obstructions due to assemblies. Of course, it would be a stretch to argue that correlative to this public duty, there exists the power to use AFRS in streets in Delhi. Eventually, however, whether the power to use AFRS is implicit in this duty or not is a matter of judicial interpretation.

Similarly, it is true that the Criminal Procedure Code (CrPC, e.g. under section 165) grants police the power to conduct a warrantless search if it has reasonable grounds to believe that waiting to obtain a warrant would cause undue delay. Moreover, section 151 of the CrPC empowers the police to arrest without a warrant to prevent the commission of cognizable offences. However, it would be sweeping to suggest that the power to conduct warrantless search includes the use of AFRS indiscriminately on a public street. 

Deriving power from common law

Interestingly, back in 2005, the Supreme Court in District Registrar and Collector, Hyderabad v. Canara Bank incidentally made an observation on the extent to which common law permits intrusion into someone’s privacy. The apex court, in this case, was concerned with a challenge to the constitutionality of a statute enabling the collector to authorise access to documents placed in the custody of a bank. In passing, the court acknowledged that restrictions on privacy can be imposed not only by statutory provisions but also in rare exceptional circumstances, administrative action deriving its powers from common law (“such as where warrantless searches could be conducted but these must be in good faith, intended to preserve evidence or intended to prevent sudden danger to person or property”). In fact, common law, according to a recent decision of the United Kingdom High Court (UKHC) in Edward Bridges v The Chief Constable of South Wales, enables the use of facial recognition technology by the police. 

Before the UKHC, Edward Bridges, a civil liberties campaigner from Cardiff challenged the use of facial recognition technology to capture his images in two instances – first, at Queen Street, a busy shopping area; and second, when he was at an exhibition in the Motorpoint Arena at Cardiff. Edward’s claim in his suit inter alia was that the AFRS deployed by the South Wales Police had no statutory basis and was therefore not lawful. 

The court, however, rejected this claim. It observed that a police constable’s obligations are non-exhaustive and include taking all steps necessary for the maintenance of peace and the protection of property. The police do not need express statutory powers to use CCTVs or AFRS for policing purposes. The power to use facial recognition technology is inherent in common law.

Police powers to conduct surveillance, however, do not authorise intrusive methods of obtaining information through entry upon private property. Interestingly, the UKHC distinguished between virtual intrusion and collection of biometrics (such as through AFRS) from physical searches (e.g. physical collection of fingerprints, DNA swabs, etc). While the former, according to the court, does not require enabling legislation, the latter needs to flow from statute since it would otherwise constitute a physically intrusive act (such as an assault). 

The court’s logic is this: the use of AFRS does not involve any physical entry, contact or force necessary to obtain biometric data. It simply involves capturing images and using algorithms to match the image with a face on a watchlist or database. Taking of fingerprints, however, requires cooperation, or the use of force on the individual which would otherwise constitute assault; a physically intrusive act.  
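For readers curious about the mechanics, the following is a rough, illustrative Python sketch of the matching step the court describes: a numeric “faceprint” (embedding) is extracted from a camera image and compared against faceprints enrolled on a watchlist. This is not the South Wales Police system; the extract_embedding function below is only a stand-in for the trained face-recognition model a real deployment would use, and the similarity threshold and toy images are arbitrary choices for the example.

import numpy as np

def extract_embedding(face_image):
    # Placeholder for a real face-recognition model: flatten the pixels and
    # normalise them so that a dot product behaves like cosine similarity.
    vec = np.asarray(face_image, dtype=float).flatten()
    return vec / (np.linalg.norm(vec) + 1e-9)

def match_against_watchlist(face_image, watchlist, threshold=0.95):
    # Compare the probe face against every enrolled faceprint and return the
    # best match, but only if its similarity score clears the threshold.
    probe = extract_embedding(face_image)
    best_name, best_score = None, -1.0
    for name, enrolled in watchlist.items():
        score = float(np.dot(probe, enrolled))
        if score > best_score:
            best_name, best_score = name, score
    return (best_name if best_score >= threshold else None), best_score

rng = np.random.default_rng(0)
known_face = rng.random((32, 32))
watchlist = {
    "person_A": extract_embedding(known_face),
    "person_B": extract_embedding(rng.random((32, 32))),
}
print(match_against_watchlist(known_face, watchlist))        # matches person_A
print(match_against_watchlist(rng.random((32, 32)), watchlist))  # no match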

A DNA swab. Photo: Reuters/Michaela Rehle

A hierarchy between physical and virtual searches

In other words, the court creates a hierarchy between physical searches and virtual searches, characterising the former as more (i.e. physically) intrusive. This understanding of intrusion from AFRS deviates from the more liberal position adopted by the Supreme Court of the United States in Katz v. United States, where the court imposed similar conditions on both physical and virtual searches, i.e. the requirement to obtain a warrant before conducting the search.

Also Read: India Is Falling Down the Facial Recognition Rabbit Hole

Indian courts should reject this hierarchisation between physical and virtual intrusion if they have to consider the question of the permissibility of facial recognition technology under common law in the future. This is because virtual intrusion through AFRS-enabled CCTVs can be equally invasive. AFRS-enabled CCTV footage is different from ordinary CCTV footage. Data from a series of AFRS-enabled CCTVs, if processed in conjunction, could reveal a series of movements of an individual, enabling long-term surveillance. Such surveillance would be qualitatively distinct from isolated observances across unrelated CCTV footage. For instance, a series of trips to a bar, a gym and a bookie or a church can reveal significantly more about a person than a single visit viewed in isolation.
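A minimal, hypothetical sketch shows why linked AFRS cameras differ from isolated footage: once each camera emits a (timestamp, camera, matched identity) record, simply grouping and sorting those records reconstructs a per-person movement trail. The record format, camera names and sightings below are invented for the example and do not correspond to any real deployment.

from collections import defaultdict
from datetime import datetime

# Hypothetical match records emitted by AFRS-enabled cameras.
matches = [
    ("2020-01-10 08:05", "cam_metro_station", "person_A"),
    ("2020-01-10 09:15", "cam_bookshop", "person_B"),
    ("2020-01-10 18:40", "cam_bar_street", "person_A"),
    ("2020-01-10 20:10", "cam_place_of_worship", "person_A"),
]

def build_trails(records):
    # Group match records by identity and order them in time, turning
    # isolated sightings into a movement trail for each person.
    trails = defaultdict(list)
    for ts, camera, identity in records:
        trails[identity].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), camera))
    for identity in trails:
        trails[identity].sort()
    return dict(trails)

for identity, trail in build_trails(matches).items():
    print(identity, "->", [camera for _, camera in trail])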

The problem with common law forming the bedrock of surveillance is that it seriously undermines the legitimacy and foreseeability of the extent of surveillance we can be subjected to.

This is particularly because law and order is a state subject and police practice varies across jurisdictions. To ensure that police use of AFRS is reasonably foreseeable and predictable, express statutory regulation which enables and regulates AFRS should be created. A proposed framework of this nature is the Facial Recognition Technology Warrant Act of 2019 introduced in the US. The Bill requires that sustained surveillance only be conducted for law enforcement purposes and in pursuance of a court order (unless it is impractical to do so). Combined with a robust data protection framework, such a framework would ensure some degree of predictability in its use. 

Siddharth Sonkar is a final year student of the National University of Juridical Sciences (NUJS), Kolkata. 

Delhi Police Is Now Using Facial Recognition Software to Screen ‘Habitual Protestors’

Narendra Modi’s Ramlila Maidan event on December 22 was the first political event where the Automated Facial Recognition System was used to screen the crowd.

New Delhi: The Delhi police is now using Automated Facial Recognition System (AFRS), a software it acquired in March 2018, to screen alleged “rabble-rousers and miscreants”. This includes those who have been protesting against the Citizenship (Amendment) Act (CAA) and National Register of Citizens (NRC).

According to a report in the Indian Express, Narendra Modi’s Ramlila Maidan event on December 22 was the first political event where this software was used to screen the crowd. This was also reportedly the first time the Delhi police used a set of facial images collected through filming protests at various spots in the capital through the years to identify “law and order suspects”. Before this event, the software had only been used thrice – twice at Independence Day parades and once at a Republic Day parade.

“This dataset of ‘select protesters’, said sources, was put to use for the first time to keep ‘miscreants who could raise slogans or banners’ out of the Prime Minister’s rally last Sunday,” the newspaper report says.

All the people who attended the Ramlila Maidan event had to pass through a metal detector gate, where a camera sent a live feed of their faces to a control room set up at the spot; the live feed was matched against the facial dataset within five seconds.

The Delhi police had acquired AFRS following a Delhi high court order in a case related to missing children. It was supposed to be used to identify lost and found boys and girls by matching photos.

This comes at a time when there have been protests across the country, some of which have turned violent, resulting in clashes with the police. At least 25 people have been killed in the police crackdown so far. The police in Delhi, UP, Karnataka and Bihar have been accused of using disproportionate force against protestors. There have also been allegations of brutality and human rights violations against the police and state authorities. Prominent elected representatives, including PM Modi and UP CM Yogi Adityanath, have blamed the violence on protestors.

There is palpable fear among activists and civil society members that this technology can be wrongfully used to profile dissenters and members of a particular community. Responding to such fears, a spokesperson of Delhi police told the Indian Express that, “such datasets are not perpetual and are revised periodically. Racial or religious profiling is never a relevant parameter while building these datasets”.

All the footage collected by Delhi police during protest demonstrations is now being fed to the AFRS, which extracts “identifiable faces” of the protesters to its dataset. An unnamed source told the Indian Express that after extraction, images are manually screened to identify and retain “habitual protesters” and “rowdy elements”.

Humans Can’t Watch All the Surveillance Cameras Out There, so Computers Are

Computers equipped with artificial intelligence video analytics software are able to monitor footage in real time, flag unusual activities, and identify faces in crowds.

Look up in most cities and it won’t take long to spot a camera. They’re attached to lampposts, mounted outside shop doors, and stationed on the corner of nearly every building. They’re mounted to the dashboard in police cars. Whether you even notice these things anymore, you know you’re constantly being filmed. But you might also assume that all the resulting footage is just sitting in a database, unviewed unless a crime is committed in the area of the camera. What human could watch all that video?

Most of the time, a human isn’t watching all the footage – but, increasingly, software is. Computers equipped with artificial intelligence video analytics software are able to monitor footage in real time, flag unusual activities and identify faces in crowds. These machines don’t get tired, and they don’t need to be paid.
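As a loose illustration of what “monitoring in real time” involves, the sketch below runs a trivial frame-differencing check over a sequence of frames and flags any frame that changes sharply from the previous one, a stand-in for the far more sophisticated “unusual activity” detectors described here. The frames are random arrays and the change threshold is arbitrary; a real system would read a live camera stream and use trained models.

import numpy as np

def flag_unusual_frames(frames, change_threshold=0.25):
    # Yield the index of any frame whose mean pixel change, compared with the
    # previous frame, exceeds the threshold.
    previous = None
    for idx, frame in enumerate(frames):
        if previous is not None:
            change = float(np.mean(np.abs(frame - previous)))
            if change > change_threshold:
                yield idx, change
        previous = frame

rng = np.random.default_rng(1)
# A synthetic "stream" of ten frames, with one abrupt change at frame 5.
stream = [rng.random((64, 64)) * (0.2 if i != 5 else 1.0) for i in range(10)]
for idx, change in flag_unusual_frames(stream):
    print(f"frame {idx} flagged (mean change {change:.2f})")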

And this isn’t at all hypothetical. In New York City, the police department partnered with Microsoft in 2013 to network at least 6,000 surveillance cameras to its Domain Awareness System, which can scan video footage from across the city and flag signs of suspicious activity that the police program it to look for, like a person’s clothing or hair color. Last October, Immigration and Customs Enforcement put out a request for vendors to provide a system that can scan videos for “triggers,” like photographs of people’s faces. For retail stores, video analytics startups are currently marketing the ability to spot potential shoplifters using facial recognition or body language alone.

New York City’s police department partnered with Microsoft in 2013 to network at least 6,000 surveillance cameras to its Domain Awareness System. Photo: Reuters

Rising demand

Demand is expected to increase. In 2018, the AI video analytics market was estimated to already be worth more than $3.2 billion, a value that’s projected to balloon to more than $8.5 billion in the next four years, according to research firm Markets and Markets. “Video is data. Data is valuable. And the value in video data is about to become accessible,” says Jay Stanley, a senior policy analyst at the American Civil Liberties Union who authored a report released earlier this month about video analytics technologies and the risks they pose when deployed without policies to prevent new forms of surveillance. For decades, surveillance cameras have produced more data than anyone’s been able to make use of, which is why video analytics systems are so appealing to police and data miners hoping to make use of all the footage that’s long been collected and left unanalysed.

One of the problems with this, Stanley writes, is that some of the purported uses of video analytics, like the ability to recognise someone’s emotional state, aren’t well-tested and are potentially bogus, but could still usher in new types of surveillance and ways to categorise people.

One company described in the report is Affectiva, which markets an array of cameras and sensors for ride-share and private car companies to put inside their vehicles. Affectiva claims its product “unobtrusively measures, in real time, complex and nuanced emotional and cognitive states from face and voice.”

Also Read: Will India’s Snooping State Survive Judicial Scrutiny?

Another company, Noldus, claims its emotion detection software can read up to six different facial expressions and their intensity levels – the company even has a product for classifying the emotional states of infants.

In 2014, when the Olympics were hosted in Sochi, Russia, officials deployed live video analytics from a company called VibraImage, which claimed to be able to read signs of agitation from faces in the crowd in order to detect potential security threats.

While in the US, this technology is primarily marketed for commercial uses, like for brands to detect if a customer is reacting positively to an ad campaign, it’s not a stretch to imagine that law enforcement may ask for emotion recognition in the future.

Challenges

Yet the idea that emotions manifest in fixed, observable states that can be categorised and labeled across diverse individuals has been challenged by psychologists and anthropologists alike, according to research published last year by the AI Now Institute.

Even when the applications offered by video analytics technologies are well tested, that doesn’t mean they’ve been found to work. One of the most popular uses of video analytics is the ability to identify a person captured in a moving image, commonly called facial recognition. Amazon has been working with the FBI and multiple police departments across the country to deploy its facial recognition software, Rekognition, which claims to be able to “process millions of photos a day” and to identify up to 100 people in a single image – a valuable tool for surveillance of large crowds, like at protests, in crowded department stores, or in subway stations.

Also Read: Home Ministry Allows 10 Central Agencies to Engage in Electronic Snooping

The problem with law enforcement using this tech is that it doesn’t work that well, especially for people with darker skin tones. An MIT study released earlier this year found that Rekognition misidentified darker-skinned women as men 31% of the time, yet made no mistakes for lighter-skinned men. The potential for injustice here is real: If police decide to approach someone based on data retrieved from facial recognition software and the software is wrong, the result could lead to unwarranted questioning or, worse, the misapplied use of force. Amazon said in a statement to the New York Times that it offers clear guidelines on how law enforcement can review possible false matches and that the company has not seen its software used “to infringe on citizens’ civil liberties.”

Policies to provide oversight and curb misuse

Though law enforcement and private businesses are already experimenting with video analytics systems, Stanley says the technology isn’t so ubiquitous yet that it’s too late to put policies in place to provide oversight, curb misuse, and in some cases, even prevent deployment. In May, for example, San Francisco passed a law banning the use of facial recognition by law enforcement. And in cities like Nashville, Oakland, and Seattle, officials are required to hold public meetings before adopting any new surveillance technologies. These policies are promising, but they’re nowhere near commonplace across the country.

Law enforcement and private businesses are already experimenting with video analytics systems. Photo: Flickr/Sheila Scarborough CC BY 2.0

A good first step to creating regulations around how video analytics are used by law enforcement, according to Ángel Díaz, a technology and civil rights lawyer at the Brennan Center for Justice, is requiring local police departments and public agencies to be transparent about the surveillance technologies they use. “Once you have mandated disclosure, it’s possible for the public to come in and have experts testify and have a public forum to discuss how these systems work and whether we want to be deploying them at all in the first place,” says Díaz.

One way to jump-start that conversation, Díaz says, is by requesting audits from inspector general offices at police departments, which people can call on to conduct audits of their local police department’s use of video analytics and other surveillance technologies. Once it’s clear what type of technologies are being used, it’s easier to create other mechanisms of accountability, like task forces for evaluating automated systems adopted by city agencies to help ensure they’re not reproducing biases.

Once a piece of technology is already in the field, it’s much harder to pass rules that restrict how it’s used, which is why it’s so important for communities concerned about overbroad surveillance technologies to demand transparency and accountability now. We’re already being watched – the question now is whether we’re going to trust computers to do all the watching.

This piece was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.