Google has completed over 100 ethical reviews of AI ‘projects, products, and deals’

In June, following in the footsteps of Microsoft, Facebook, and others, Google unveiled a set of seven principles intended to guide its work in artificial intelligence (AI). According to the Mountain View company, AI projects it chooses to pursue must (1) be socially beneficial, (2) avoid creating or reinforcing bias, (3) be built and tested for safety, (4) be accountable to people, (5) incorporate privacy design principles, (6) uphold high standards of scientific excellence, and (7) be made available for uses that accord with all of those principles.
Today, six months later, it gave a progress update on its effort to implement those guidelines.
Kent Walker, senior vice president of global affairs at Google, said in a blog post that a formal review structure to assess new “projects, products and deals” has been established, and that more than 100 reviews have been completed so far. Some have resulted in decisions to modify research in visual speech recognition and to hold off on commercial offerings of technology like general-purpose facial recognition.
“Thoughtful decisions require careful and nuanced consideration of how the AI principles … should apply, how to make tradeoffs when principles come into conflict, and how to mitigate risks for a given circumstance,” Walker said. “Most of these cases … have aligned with the principles.”
Google’s AI ethics review team as it exists today consists of two tiers: a first group of researchers, social scientists, ethicists, human rights specialists, policy and privacy advisors, and legal experts who handle initial assessments and “day-to-day operations,” and a second group of “senior experts” from a “range of disciplines” across Alphabet (Google’s parent company) who provide technological, functional, and application expertise. A council of senior executives handles the most “complex and difficult issues,” including decisions that affect Google’s products and technologies.
The eventual goal, Walker said, is to scale the decision-making framework across Google, to rope in “experts from a variety of disciplines,” and to create an external advisory group that complements the existing internal review processes.
“We’re committed to promoting thoughtful consideration of these important issues and appreciate the work of the many teams contributing to the review process, as we continue to refine our approach,” Walker wrote.
Google also said today that it’s embarked on several pedagogical efforts to spread awareness about the AI principles, including a pilot training course based on the Santa Clara University Markkula Center for Applied Ethics’ “Ethics in Technology Practice” project. Additionally, it’s hosted an AI Ethics speaker series covering topics such as bias in natural language processing (NLP) and the use of AI in criminal justice, and it added a module on fairness to its online Machine Learning Crash Course.
Google’s progress report comes weeks after the company tweaked Google Translate, its freely available language translation tool, to show both feminine and masculine translations for some languages, and after it blocked Smart Compose, a Gmail feature that automatically suggests sentences as users type, from suggesting gender-based pronouns.
Those examples are far from the company’s only gaffes. In 2015, it was forced to apologize when Google Photos’ image recognition component labeled a black couple as “gorillas.” And a year later, in response to public backlash, it altered Google Search’s autocomplete function after it suggested the anti-Semitic query “are jews evil” when users sought information about Jews.
More recently, Google has faced criticism over Project Maven, the controversial Pentagon research program that sought to use AI to improve object recognition in military drones. While under the Project Maven contract, Google contributed TensorFlow, its open source AI framework, to the Pentagon, and it reportedly planned to build a “Google Earth-like” surveillance system that would allow Defense Department analysts and contractors to “click on” buildings, vehicles, people, large crowds, and landmarks and “see everything associated with [them].”
Google’s participation prompted dozens of employees to resign and more than 4,000 others to sign an open opposition letter, which this summer led to the drafting of an internal ethics policy to guide Google’s involvement in future military projects.
To be fair, Google is not the only company that’s received criticism for controversial applications of AI.
This summer, Amazon supplied Rekognition, a cloud-based image analysis technology available through its Amazon Web Services division, to law enforcement in Orlando, Florida, and to the Washington County, Oregon Sheriff’s Office. In a test whose accuracy Amazon disputes, the American Civil Liberties Union fed Rekognition 25,000 mugshots from a “public source” and tasked it with comparing them to official photos of members of Congress; the system misidentified 28 members as criminals.
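For readers curious about the mechanics, the kind of comparison the ACLU ran boils down to pairwise calls against Rekognition’s face-matching API. The following is a minimal sketch of a single such call using the boto3 client; the file names, region, and threshold here are illustrative assumptions, not the ACLU’s actual code or dataset.

```python
# Minimal sketch of one Rekognition face-comparison call via boto3.
# Illustrative only: the image files and region are hypothetical, and the
# 80% threshold reflects Rekognition's documented default, which the ACLU
# reportedly used in its test.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("member_of_congress.jpg", "rb") as source, \
        open("mugshot.jpg", "rb") as target:
    response = client.compare_faces(
        SourceImage={"Bytes": source.read()},
        TargetImage={"Bytes": target.read()},
        SimilarityThreshold=80,  # matches below this score are discarded
    )

# Any entry in FaceMatches is a reported (possibly false) identification.
for match in response["FaceMatches"]:
    print(f"Reported match at {match['Similarity']:.1f}% similarity")
```

The threshold is central to Amazon’s objection: the company has argued that law enforcement use cases should apply a 99 percent similarity threshold rather than the 80 percent default, which is a large part of why it disputed the test’s accuracy.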
And in September, a report in The Intercept revealed that IBM worked with the New York City Police Department to develop a system that allowed officials to search for people by skin color, hair color, gender, age, and various facial features. Using “thousands” of photographs from roughly 50 cameras provided by the NYPD, its AI learned to identify clothing color and other bodily characteristics.
Source: VentureBeat