SINGAPORE: In Singapore, security cameras watch corridors of housing blocks and student hostels.
Cameras detect littering from high-rise buildings, illegal smoking and e-scooters on footpaths.
Singaporeans, it seems, are accustomed to being watched for security and safety, and to curb nuisances.
Now, it’s increasingly practical to use cameras that not only capture images but recognise faces.
Singapore is among nations experimenting with facial recognition at immigration checkpoints.
Next, Singapore must choose. It can weave the technology more widely into everyday life, or join those taking a wait-and-see approach.
Adopting the technology, along with regulations to maintain public trust, is the smarter approach.
Worldwide, in recent months, some have scrambled to bar facial recognition while others embrace it.
In January, the European Commission considered a five-year moratorium on facial recognition in public places so that the authorities could develop a risk management strategy. Google CEO Sundar Pichai expressed openness to the ban. The Commission ultimately rejected a ban.
Both San Francisco and Oakland, California – home to many who work in the tech industry – barred city government from using the technology last year. Then the whole state of California banned law enforcement from using it. Cambridge, Massachusetts – home to Harvard and MIT – just banned it.
Berlin also imposed a moratorium.
In December, the US dropped a plan to require facial scans of its citizens at immigration checkpoints. In February, two US senators introduced a bill that would impose a moratorium.
Meanwhile, in recent days, authorities in India announced trials of facial recognition on railways. Police in London and Moscow announced they will use it in public places to match passers-by to “wanted” lists in real time.
Facial recognition is here to stay. A moratorium on it carries its own risk: falling behind other nations in the development of cutting-edge crime-fighting capabilities.
Instead, developing controls on the technology can prevent errors and abuses and maintain public trust.
One might think that a searchable international database of billions of photos would need central planning and decades to build.
The New York Times reported in January that over 600 American law enforcement agencies used the services of Clearview AI.
This tiny start-up scraped – that is, it automatically harvested – billions of publicly available images from all sorts of websites, including Facebook, Instagram, YouTube and Twitter.
When a user uploads a person's photo to the Clearview app, other images of the person appear. By following links to the sources, a user can often get the subject's name and more.
Clearview’s scraping violates the terms of service of the social media sites. But a US court refused to block such scraping.
The Clearview case shows how available the technology is and how easy it is to build a massive database of images.
Artificial intelligence experts everywhere know how to design facial recognition technology, because, as Al Gidari of Stanford Law School says: “There is no monopoly on math”.
Those opting for a moratorium on facial recognition should bear in mind that friends and foes around the world are already using it.
If others stand still, China looks set to dominate facial recognition. China uses it in subways, airports, and rail stations in the name of public safety and health.
To manage the spread of COVID-19, SenseTime, a leading Chinese AI company, unveiled a facial recognition product that incorporates thermal imaging to detect and identify people running a fever. It can scan up to 10 people per second, and has been deployed in public places in Beijing, Shanghai and Shenzhen.
But there's the question of overreach. Getting a mobile phone line in China requires facial scans. Limits on data-sharing between private and public sectors are unclear.
Others also have a head start. The US state of Florida began experimenting back in 2000.
London has for years been a pioneer. Brexit supporters argued that a reason to leave the EU was to escape its cautious approach to artificial intelligence.
RISKS AND SOLUTIONS
Those who favour going slowly on facial recognition say we must not be guinea pigs in experiments with error-prone technology.
MIT researchers published studies showing that commercially available facial recognition products made more errors in identifying the gender of people with dark skin than light skin, and more errors in determining the gender of women than men.
In December 2019, a US government study found similar biases in most of the hundreds of algorithms on the market.
The probable explanation: Algorithms were trained on more images of light-skinned people than dark-skinned people and more images of men than women.
If facial recognition makes more mistakes with some groups than others, using it could be viewed as discrimination. Public trust in it may erode. The risks are vivid in a city as diverse as Singapore.
The explanation for the technology’s bias also points to a solution. If it’s trained on more diverse data, it may perform better.
The remedy is not a moratorium, but better algorithms trained on better data.
Breezing through immigration checkpoints without presenting a document is an appealing vision. But the crime-fighting potential of facial recognition makes experimenting with it urgent.
The technology has been used to rescue victims of child sex abuse, to catch perpetrators of these and other crimes, and to identify bodies of murder victims.
The New York police commissioner has argued: “It would be an injustice to the people we serve if we policed our 21st-century city without using 21st-century technology.”
Microsoft’s president Brad Smith said: “There is only one way at the end of the day to make technology better and that is to use it”.
As callous as it sounds, technological development proceeds through trial and error – through iteration. Automobile and air travel, and countless medical procedures, were once far riskier than we would tolerate now.
The way forward is to refine the technology, deploy it, and learn from mistakes. And set the rules.
REGULATION IS THE BEST PROTECTION
A compromise between standing still and proceeding unfettered is to regulate the technology to maintain trust.
In a deviation from the usual Silicon Valley approach – “move fast and break things” – both Google’s Pichai and Microsoft’s Smith have advocated AI regulation.
Simon Chesterman, dean of the National University of Singapore Faculty of Law, has wisely observed that it’s hard to anticipate, when a technology like facial recognition is in its infancy, what regulations are required.
That said, no matter how facial recognition develops, certain legal questions are inevitable. Answers are worth considering now.
False positives – saying your face matches one from a wanted list when you are not on the list – are an entirely foreseeable, dire problem.
Before facial recognition technology is relied on to make arrests, it could be legally required to meet high reliability standards across demographic groups in field trials.
Laws could require that before an arrest, a person’s identity must be confirmed through multiple methods, not just a camera saying it’s a match.
Should facial recognition cameras be installed in most public places, people worry that they will be continuously tracked and lose anonymity.
A warrant could be required before law enforcement can use the cameras to monitor an individual as she goes about the city, or before stored footage is checked to see where she has been.
Exceptions could be made for emergencies. Rules can determine how long such data is kept and how securely.
Microsoft has backed requiring court orders to track individuals with facial recognition.
US law has precedents: It generally allows data collection in public, but some exceptions require a warrant.
People don’t like surprises about how their data is used. An everyday example: Many find it creepy to browse online for a pair of shoes and then get shoe ads for weeks.
Ideally, people want notification about – and they want to consent to – handling of their data. Law enforcement must sometimes forgo notification and consent to avoid tipping off the targets of investigations.
But the closer governments come to meeting those ideals, the more likely they are to maintain trust.
In Hong Kong and worldwide, some fear that facial recognition could be used to monitor government critics. They want reassurance about limits on its use.
Protocols for when government employees can access facial data may prevent abuses like a rogue officer tracking an ex-girlfriend.
The sharing of facial recognition data among government agencies could be regulated. Then, to give a hypothetical example, people won’t be surprised to find that immigration photos are used to identify them in other contexts.
Rules can tell us when law enforcement can access our images from social networks. In response to the Times’ report on Clearview, New Jersey barred police use of the app.
STANDING STILL MEANS FALLING BEHIND
Standing still as rival nations develop technology means falling behind.
Harnessing technological potential – without alienating people or depriving them of their rights – requires developing regulations along with the technology.
Trust in the authorities is high in Singapore. Clear rules can help maintain and strengthen that trust.
Dr Mark Cenite is a principal lecturer at the Wee Kim Wee School of Communication & Information at Nanyang Technological University, where he teaches courses on communication law and artificial intelligence and the law.