
Facial recognition remains tempting but toxic for tech companies

In a blog post announcing support for the Asia Pacific AI for Social Good Research Network and highlighting Google’s efforts to use artificial intelligence (AI) to combat disease and natural disasters, Kent Walker, senior vice president of global affairs, wrote that Google wouldn’t offer a “general-purpose” facial recognition API through Google Cloud until the “challenges” had been “identif[ied] and address[ed].”
“Unlike some other companies, Google … [is] working through important technology and policy questions [regarding facial recognition],” Walker said. “Like many technologies with multiple uses, [it] … merits careful consideration to ensure its use is aligned with our principles and values, and avoids abuse and harmful outcomes.”
The Mountain View company’s caution comes at a time when it faces scrutiny over Project Dragonfly, an initiative to build a censored version of its search engine for the Chinese market, and shortly after it decided not to renew a contract with the U.S. Department of Defense to supply AI that analyzes drone footage. (More than 1,400 employees reportedly signed a petition against Dragonfly, and 700 of them made their opposition public.)
But it reflects broader concerns among some tech giants about facial recognition technology’s immaturity and its potential to cause harm. Earlier this month at an event in Washington, D.C., hosted by the Brookings Institution, Microsoft president Brad Smith proposed that people should review the results of facial recognition in “high-stakes scenarios,” such as when it might restrict a person’s movements; that groups using facial recognition should comply with anti-discrimination laws regarding gender, ethnicity, and race; and that companies should be “transparent” about AI’s limitations.
In keeping with those strictures, Smith said that Microsoft has historically turned down client requests to deploy facial recognition technology where the company concluded there were human rights risks. In June, it also canceled a contract that would have seen it supply processing and AI tools to U.S. Immigration and Customs Enforcement (ICE).
“Technology is making possible a new type of mass surveillance. It is becoming possible for the state, for a government, to follow anyone anywhere,” Smith said. “If we fail to think these things through, we run the risk that we’re going to suddenly find ourselves in the year 2024 and our lives are going to look a little too much like they came out of the book ‘1984.’”
Richard Socher, Salesforce’s chief scientist, shares those anxieties. It’s partially why Salesforce doesn’t currently offer facial recognition capabilities through Einstein Vision and the Einstein Image Classification API, its computer vision services for object detection and identification, he told VentureBeat in an interview during the NeurIPS 2018 conference in Montreal this month.
“As soon as you start to make more and more important decisions based on [someone’s] face, you can do some terrible things,” he said. “AI will only make decisions that are as good as its training data.”

Blunders upon blunders

Not every company feels the same way, however.
This summer, Amazon seeded Rekognition, a cloud-based image analysis technology available through its Amazon Web Services division, to law enforcement in Orlando, Florida, and to the Washington County, Oregon, Sheriff’s Office. The City of Orlando later decided to renew its agreement and pilot a facial recognition program involving volunteers from the city’s police force, and Washington County used the technology to build an app that lets deputies run scanned photos of suspected criminals through a database of 300,000 faces.
In a test (the accuracy of which Amazon disputes), the American Civil Liberties Union demonstrated that Rekognition, when fed 25,000 mugshots from a “public source” and tasked with comparing them to official photos of members of Congress, misidentified 28 of those members as criminals. Alarmingly, 38 percent of the false matches were people of color, a disproportionate share.
AWS general manager Matt Wood offered counterpoints in June, arguing that Rekognition was “materially benefiting” society by “inhibiting child exploitation … and building educational apps for children,” and by “enhancing security through multi-factor authentication, finding images more easily, or preventing package theft.”
In a separate blog post published in August, Amazon said that AWS customers like Marinus Analytics were using Rekognition to help find human trafficking victims and reunite them with their families, and that other organizations, such as nonprofit Thorn, were tapping it to find and rescue children who had been sexually abused.
“There has been no reported law enforcement abuse of Amazon Rekognition,” Wood wrote. “There have always been and will always be risks with new technology capabilities. Each organization choosing to employ technology must act responsibly or risk legal penalties and public condemnation. AWS takes its responsibilities seriously.”
Others have exercised less caution still.
In September, a report in The Intercept revealed that IBM worked with the New York City Police Department to develop a system that allowed officials to search for people by skin color, hair color, gender, age, and various facial features. Using “thousands” of photographs from roughly 50 cameras provided by the NYPD, its AI learned to identify clothing color and other bodily characteristics.
An IBM spokesperson said the system was only ever used for “evaluation purposes,” but IBM’s Intelligent Video Analytics 2.0 product, which was released in 2017, offers a somewhat similar body camera surveillance feature that automatically labels people by tags such as “Asian,” “Black,” and “White.”

Potential for bias

Setting aside the ethical implications Socher alluded to, a growing body of research casts doubt on the overall accuracy of facial recognition.
A 2012 study showed that facial recognition algorithms from vendor Cognitec performed 5 to 10 percent worse on African Americans than on Caucasians, and researchers in 2011 found that facial recognition models developed in China, Japan, and South Korea had difficulty distinguishing between Caucasian and East Asian faces. In February, researchers at the MIT Media Lab found that facial recognition systems made by Microsoft, IBM, and Chinese company Megvii misidentified gender in up to 7 percent of lighter-skinned females, up to 12 percent of darker-skinned males, and up to 35 percent of darker-skinned females.
Those are far from the only examples of algorithms gone awry. It was recently revealed that a system deployed by London’s Metropolitan Police produces as many as 49 false matches for every hit. During a House oversight committee hearing on facial recognition technologies last year, the U.S. Federal Bureau of Investigation admitted that the algorithms it uses to identify criminal suspects are wrong about 15 percent of the time. And a study conducted by researchers at the University of Virginia found that two prominent research-image collections — ImSitu and COCO, the latter of which is cosponsored by Facebook, Microsoft, and startup MightyAI — displayed gender bias in their depiction of sports, cooking, and other activities. (Images of shopping, for example, were linked to women, while coaching was associated with men.)
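The Metropolitan Police figure is less surprising once base rates are taken into account: genuine watchlist members are vanishingly rare in a scanned crowd, so even a matcher with a low false positive rate generates far more false alerts than true hits. The back-of-the-envelope sketch below uses entirely hypothetical numbers to show how a ratio on the order of 50 false matches per hit can arise.

```python
# Hypothetical illustration of why crowd-scanning face matchers produce far
# more false matches than true hits: the base rate of genuine watchlist
# members in a crowd is tiny. All numbers below are invented.

crowd_size = 100_000        # faces scanned at an event (hypothetical)
on_watchlist = 20           # genuine watchlist members in that crowd (hypothetical)
true_positive_rate = 0.99   # matcher correctly flags 99% of watchlist members
false_positive_rate = 0.01  # matcher wrongly flags 1% of everyone else

true_hits = on_watchlist * true_positive_rate
false_matches = (crowd_size - on_watchlist) * false_positive_rate

print(f"True hits:     {true_hits:.0f}")       # ~20
print(f"False matches: {false_matches:.0f}")   # ~1,000
print(f"False matches per true hit: {false_matches / true_hits:.0f}")  # ~50
```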
Perhaps most infamously of all, in 2015, a software engineer reported that Google Photos’ image classification algorithms identified African Americans as “gorillas.”
Even Rick Smith, CEO of Axon, one of the largest suppliers of body cameras in the U.S., said this summer that facial recognition isn’t yet accurate enough for law enforcement applications.
“[They aren’t] where they need to be to be making operational decisions off the facial recognition,” he said. “This is one where we think you don’t want to be premature and end up either where you have technical failures with disastrous outcomes or … there’s some unintended use case where it ends up being unacceptable publicly in terms of long-term use of the technology.”

Signs of progress

The past decade’s many blunders paint a depressing picture of facial recognition’s capabilities. But that’s not to suggest progress hasn’t been made toward more accurate, less prejudicial technology.
In June, working with experts in AI fairness, Microsoft revised and expanded the datasets it uses to train Face API, a Microsoft Azure API that provides algorithms for detecting, recognizing, and analyzing human faces in images. With new data across skin tones, genders, and ages, it was able to reduce error rates for men and women with darker skin by up to 20 times, and by 9 times for all women.
Meanwhile, Gfycat, a San Francisco startup that hosts user-generated short videos, said this year that it improved its facial recognition algorithms’ accuracy on people of Asian descent by applying stricter detection thresholds.
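Gfycat hasn’t published its implementation, but the underlying idea is simple: a face matcher produces a similarity score, and instead of one global cutoff, a higher score is required before declaring a match for groups where false matches had been more common. The sketch below is a minimal illustration of that logic; the threshold values, group labels, and scoring setup are hypothetical rather than Gfycat’s actual system.

```python
# Illustrative only: group-specific match thresholds, loosely mirroring the
# approach Gfycat described. Thresholds and group labels are made up.

DEFAULT_THRESHOLD = 0.50
STRICTER_THRESHOLDS = {
    "east_asian": 0.65,  # demand a higher similarity score before matching
}

def is_match(similarity_score: float, predicted_group: str) -> bool:
    """Declare a match only if the score clears the group's threshold."""
    threshold = STRICTER_THRESHOLDS.get(predicted_group, DEFAULT_THRESHOLD)
    return similarity_score >= threshold

# A score of 0.60 counts as a match under the default threshold,
# but not for the group with the stricter cutoff.
print(is_match(0.60, "other"))       # True
print(is_match(0.60, "east_asian"))  # False
```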
An emerging class of algorithmic bias mitigation tools promises to accelerate progress toward more impartial AI.
In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on his or her race, gender, or age. Accenture released a toolkit that automatically detects bias in AI algorithms and helps data scientists mitigate that bias. Microsoft launched a solution of its own in May, and in September, Google debuted the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework.
IBM, not to be outdone, in the fall released AI Fairness 360, a cloud-based, fully automated suite that “continually provides [insights]” into how AI systems are making their decisions and recommends adjustments — such as algorithmic tweaks or counterbalancing data — that might lessen the impact of prejudice. And recent research from its Watson and Cloud Platforms group has focused on mitigating bias in AI models, specifically as they relate to facial recognition.
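These tools differ in scope, but at their core they measure how a model’s behavior differs across demographic groups. As a rough illustration of the kind of check they automate, the sketch below computes per-group false match rates and a simple disparity ratio from hypothetical predictions; it is not tied to any one vendor’s toolkit, and the data and column names are invented.

```python
import pandas as pd

# Hypothetical evaluation results for a face matcher: whether each probe truly
# matched the claimed identity, what the model predicted, and a demographic
# group label. All values are invented for illustration.
results = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "is_match":  [0,   0,   1,   1,   0,   0,   0,   1],
    "predicted": [0,   1,   1,   1,   0,   0,   1,   1],
})

def false_match_rate(g: pd.DataFrame) -> float:
    """Share of true non-matches that the model wrongly flagged as matches."""
    non_matches = g[g["is_match"] == 0]
    return (non_matches["predicted"] == 1).mean()

per_group = results.groupby("group").apply(false_match_rate)
print(per_group)
# A large gap between groups is the kind of disparity these toolkits surface.
print("Disparity ratio:", per_group.max() / per_group.min())
```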
But there’s much work to be done, Microsoft’s Smith says.
“Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way,” he wrote in a blog post earlier this year. “All tools can be used for good or ill. The more powerful the tool, the greater the benefit or damage it can cause … Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression.”
Source: VentureBeat