Last week, Amazon made the unexpected move of calling for regulation on facial recognition. In a blog post published on Thursday, Michael Punke, VP of global public policy at Amazon Web Services, expressed support for a “national legislative framework that protects individual civil rights and ensures that governments are transparent in their use of facial recognition technology.”
Facial recognition is one of the fastest-growing areas of the artificial intelligence industry. It has drawn interest from both the public and private sector and is already worth billions of dollars. Amazon has been moving fast to establish itself as a leader in facial recognition technology, actively marketing its Rekognition service to different customers, including law enforcement agencies.
Amazon’s latest announcement marks a break from its previous stance on legal constraints on its technology.
The company faced backlash from employees and civil rights activists over technical problems and privacy concerns regarding its facial recognition service and the implications of making it available to law enforcement and the military. But unlike other tech giants—namely Google and Microsoft—Amazon was not moved by the criticism and seemed determined to pursue its facial recognition ambitions at full speed.
“We are going to continue to support the DoD, and I think we should,” Amazon CEO Jeff Bezos said at a tech conference in San Francisco in October. “One of the jobs of senior leadership is to make the right decision, even when it’s unpopular.”
So Punke’s announcement might look like a turnabout for the company. Amazon is not the first company to call for regulation of facial recognition, though. Microsoft President Brad Smith has been very vocal about the issue and has stressed on several occasions the need for regulatory oversight of the development and use of facial recognition.
At first glance, vendors of facial recognition technologies seem to be going out of their way by calling for measures that would put constraints on their own businesses. But in reality, as the industry matures, regulation will serve the greater interests of the developers of facial recognition technology, especially big tech companies.
The benefits of facial recognition
Something that may have been lost in the haze of controversy surrounding facial recognition services is the genuinely positive use cases the technology offers. Punke makes sure those positive uses don’t go unnoticed.
“We’ve already seen the technology used to prevent human trafficking, reunite missing children with their parents, improve the physical security of a facility by automating access, and moderate offensive and illegal imagery posted online for removal,” Punke writes at the beginning of the post.
In a blog post published in December, Microsoft’s Brad Smith listed several specific instances where the use of modern facial recognition has helped improve lives. One of the most notable cases was in India, where authorities were able to trace 3,000 missing children by analyzing CCTV footage with facial recognition technology.
Smith names other positive uses for facial recognition, including the detection of genetic diseases (although in that case, the technology used is facial analysis, a technique that is very close to facial recognition).
Facial recognition is also useful in various settings to simplify actions that require the verification of a person’s identity, such as making payments or unlocking smartphones.
The privacy threats of using facial recognition
However, the benefits of facial recognition come with clear privacy tradeoffs. To be fair, facial recognition can only be used where a video feed already exists, so some might argue that if you’re being filmed, you’ve already given up your privacy, and adding facial recognition capability to those cameras makes no difference from a privacy standpoint.
But facial recognition adds new profiling capabilities that can serve very sinister purposes. Take the example of the missing children of India. The government already had access to millions of hours’ worth of surveillance video.
But comparing the faces of all the people appearing in those videos against the photos of the thousands of missing children is something that is beyond the capability of human operators.
In contrast, a well-trained facial recognition algorithm can analyze all that video and flag the faces of sought persons in a matter of hours. While in the case of India, facial recognition helped save the lives of missing children, in other settings it can become a tool for keeping tabs on and controlling specific groups of people, such as religious and ethnic minorities.
“When combined with ubiquitous cameras and massive computing power and storage in the cloud, a government could use facial recognition technology to enable continuous surveillance of specific individuals. It could follow anyone anywhere, or for that matter, everyone everywhere. It could do this at any time or even all the time. This use of facial recognition technology could unleash mass surveillance on an unprecedented scale,” Smith writes.
Therefore, while recording video is a privacy problem, having tools that could search through all that video with the speed of state-of-the-art facial recognition algorithms adds new concerns.
For instance, consider Amazon’s drive to bring its facial recognition technology to smart home security cameras. Should this happen, people who pass by a Rekognition-enhanced camera will not only have their faces recorded, but will also have their metadata uploaded to Amazon’s cloud. This means the company will have a searchable directory of movements and locations linked to people’s profiles, which it can then use for advertising purposes (or for more sinister ends).
Part of the calls for regulation concern transparency. Both government agencies and commercial organizations must clearly state when a camera is using facial recognition technology and to what extent, including the kind of data it collects, stores and analyzes.
“People deserve to know when this type of technology is being used, so they can ask questions and exercise some choice in the matter if they wish. Indeed, we believe this type of transparency is vital for building public knowledge and confidence in this technology,” Smith writes.
Amazon’s Punke also underlines the need for transparency and notice in both the public and commercial sector.
Vendors of facial recognition must also make sure their customers don’t use their technology to infringe on the rights of other people.
“Our customers are responsible for following the law in how they use the technology,” Punke writes in the AWS blog post.
If developers fail to take the necessary measures to prevent their customers from making unethical use of their facial recognition technology, they must be held to account by the law.
There are no standards on fair and ethical use of facial recognition
“In the two-plus years we’ve been offering Amazon Rekognition, we have not received a single report of misuse by law enforcement,” writes Punke. But the problem is, there’s no clear definition of the ethical and fair use of facial recognition technology.
“What does misuse mean when there are no rules on use versus misuse?” Clare Garvie, a fellow at Georgetown University’s Center on Privacy and Technology, told Wired in an interview.
For instance, in 2018, researchers from the ACLU found that Rekognition incorrectly matched at least 28 members of Congress to mugshots of people who had been arrested. The researchers noted that the facial recognition technology was less accurate on images of non-white people, which could lead to unfair decisions by law enforcement agencies. But Amazon dismissed the criticism, arguing that the researchers had not correctly adjusted the accuracy threshold of its service.
Current facial recognition technologies use machine learning. Given an image, they return a face ID along with a confidence score indicating how probable it is that the face belongs to that person. By default, Amazon’s facial recognition technology considers any result with a confidence score of 80 percent or higher to be a match.
In response to the ACLU research, an Amazon spokesperson said the company had advised law enforcement to use a 95-percent threshold. In the blog post, Punke recommends, “law enforcement should use the recommended 99% confidence threshold.”
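To make the threshold debate concrete, here is a minimal Python sketch of confidence-based filtering. The candidate names and scores are invented, and the data structure only loosely resembles a face-search response, not any real API output; it simply shows how raising the threshold from the 80 percent default to the 99 percent recommended for law enforcement shrinks the set of reported matches.

```python
# Hypothetical face-match results as (candidate_id, confidence in percent).
# These values are illustrative only, not real Rekognition output.
matches = [
    ("person_a", 99.2),
    ("person_b", 93.5),
    ("person_c", 85.1),
    ("person_d", 80.4),
]

def filter_matches(matches, threshold):
    """Keep only candidates whose confidence score meets the threshold."""
    return [(face_id, score) for face_id, score in matches if score >= threshold]

# At the 80 percent default, all four candidates count as matches.
print(filter_matches(matches, 80.0))
# At a 99 percent threshold, only one candidate survives.
print(filter_matches(matches, 99.0))
```

The example makes the stakes of the Washington County story obvious: the same system returns four “matches” or one depending entirely on a single configurable number, which is why who sets that number matters.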
But the Washington County Sheriff’s Office in Oregon, which is one of the customers of Amazon’s facial recognition technology, told Gizmodo that it isn’t using confidence scores recommended by the company.
What this entire debate proves is that without proper regulatory guidelines on how to use facial recognition technology, law enforcement might be undermining its own efforts to make cities safer. Relying on a technology that churns out too many false positives will result in the arrest of innocent people and erode trust in facial recognition.
Should facial recognition be used to automate decisions?
One of the most contentious debates over the use of facial recognition is whether it should be allowed to make decisions without human oversight.
Should law enforcement automatically issue arrest warrants for people flagged by facial recognition systems? In his post, Punke clearly underlines the need for keeping a human in the loop.
“Facial recognition should not be used to make fully automated, final decisions that might result in a violation of a person’s civil rights. In these situations, human review of facial recognition results should be used to ensure rights are not violated,” Punke writes.
But without legal safeguards, government agencies might eventually hand over critical decision-making to AI algorithms. This concern was underscored by the controversy surrounding Google’s development of computer vision algorithms for the Department of Defense (DoD).
In a signed petition to Google executives, 90 academics in artificial intelligence, ethics, and computer science warned that “these technologies are poised to become a basis for automated target recognition and autonomous weapon systems.”
The need for human supervision is especially important because like other technologies based on deep learning and neural networks, current facial recognition applications are prone to algorithmic bias. Several studies have shown that facial recognition and analysis technologies from Amazon, Microsoft, IBM and other providers show different levels of accuracy toward various groups of people.
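The measurement behind the bias studies mentioned above can be sketched in a few lines of Python. The groups and records below are invented for illustration and do not come from any real audit; the point is only that computing accuracy separately per demographic group is what surfaces the disparities those studies report.

```python
from collections import defaultdict

# Hypothetical audit records as (demographic_group, prediction_was_correct).
# The group labels and outcomes are made up, not drawn from any study.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def accuracy_by_group(records):
    """Compute per-group accuracy, the basic measurement behind bias audits."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        correct[group] += int(ok)
    return {group: correct[group] / totals[group] for group in totals}

print(accuracy_by_group(records))  # on this toy data: group_a 0.75, group_b 0.25
```

A gap like the one in this toy output, when it appears on real benchmark data, is precisely the kind of disparity that argues for a human reviewer between the algorithm and any consequential decision.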
Why tech giants stand to gain from regulation of facial recognition
There’s a general perception that regulation hampers innovation. But in critical domains, regulation ensures responsible innovation while preventing damaging practices. For instance, in the automobile industry, regulations require all companies to abide by a set of safety standards. No manufacturer can lower the costs and price of its vehicles by removing or degrading vital parts such as brakes, airbags or chassis, because the law prohibits such a car from being sold to consumers and driven on public roads. In this case, regulation makes sure that companies can innovate responsibly while preventing bad actors from damaging the market with unfair practices.
The same thing goes for the facial recognition industry. Without regulation, companies might sacrifice ethics, privacy and moral standards for the sake of clinching deals. “We don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success,” Microsoft’s Smith wrote.
Regulation surrounding facial recognition will make sure that companies can continue to innovate within a framework that ensures the rights of all people are respected and prevents bad actors from undermining the industry and market with questionable practices.
And therein lies the most important benefit to companies like Amazon and Microsoft. At the beginning of this post, we mentioned that at first glance, it seems that companies such as Microsoft and Amazon are going out of their way by calling for regulation of their technology.
But the truth is, regulation or not, those companies will be forced to abide by a set of ethical principles simply because of the sheer size of capital and market share they are sitting on. They will always be under scrutiny by lawmakers, rights activists, analysts, the media and others who are looking at them through the skeptical lens of Big Brother tech companies.
Google grudgingly relinquished its involvement in DoD’s Project Maven after it faced backlash from employees and AI experts. Microsoft also faced criticism from its employees because of a controversial contract with ICE.
Should big tech companies exit the game, startups and smaller companies, which are spared much of the ethical scrutiny directed at the likes of Google, Amazon and Microsoft, will find it easier to compete for the lucrative deals the AI and facial recognition industry offers. Law enforcement might also find it less troublesome to contract with a company that draws less public attention.
Regulation will level the playing field (in favor of big tech companies) by requiring all developers and users of facial recognition to abide by the same rules. Regulatory compliance costs money, and covering those costs will be easier for companies like Microsoft and Amazon than for startups burning through seed and Series A capital.
In many ways, big tech’s support for regulating facial recognition is like its support for net neutrality and GDPR rules.
At the end of the day, big tech’s push to regulate the facial recognition industry might be in total good faith. Or it might simply be a way to make sure that the constraints that already bind these companies also legally apply to every other company competing for their share of the facial recognition market.