
MIT researchers show how to detect and address AI bias without loss in accuracy

Bias in AI may only mean poor search results or a degraded user experience when a predictive model is deployed in social media, but it can seriously harm human lives when AI is used for health care, autonomous vehicles, criminal justice, or the predictive policing tactics used by law enforcement.
In the age of AI being deployed virtually everywhere, this could lead to ongoing systematic discrimination.
That’s why researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a method to reduce bias in AI without reducing the accuracy of predictive results.
“We view this as a toolbox for helping machine learning engineers figure out what questions to ask of their data in order to diagnose why their systems may be making unfair predictions,” said MIT professor David Sontag in a statement shared with VentureBeat. The paper was written by Sontag together with Ph.D. student Irene Chen and postdoctoral associate Fredrik D. Johansson.
The key, Sontag said, is often to get more data from underrepresented groups. For example, in one case the researchers found that an AI model was twice as likely to label women as low-income and men as high-income. By increasing the representation of women in the dataset by a factor of 10, the number of inaccurate results was reduced by 40 percent.
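The intuition is easy to reproduce at small scale. The sketch below runs on synthetic stand-in data rather than the census data from the study, and the column names, logistic-regression model, and tenfold oversampling helper are illustrative assumptions, not the authors' code; it simply replicates rows from an underrepresented group in the training set and then reports per-group accuracy.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def upsample_group(df, column, value, factor):
    """Replicate rows of an underrepresented group so its share grows roughly `factor`-fold."""
    minority = df[df[column] == value]
    extra = pd.concat([minority] * (factor - 1), ignore_index=True)
    return pd.concat([df, extra], ignore_index=True)

# Synthetic stand-in data: group 1 is heavily underrepresented.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sex": rng.choice([0, 1], size=2000, p=[0.9, 0.1]),
    "hours": rng.normal(40, 10, size=2000),
})
df["income"] = (df["hours"] + rng.normal(0, 5, size=2000) > 42).astype(int)

# Split first so duplicated rows never leak into evaluation, then oversample only the training set.
train, test = train_test_split(df, test_size=0.2, random_state=0)
balanced_train = upsample_group(train, column="sex", value=1, factor=10)

features = ["sex", "hours"]
model = LogisticRegression(max_iter=1000).fit(balanced_train[features], balanced_train["income"])
for g in (0, 1):
    part = test[test["sex"] == g]
    print(f"group {g} accuracy:", accuracy_score(part["income"], model.predict(part[features])))
```

Comparing the per-group accuracies with and without the oversampling step is one rough way to see how added representation shifts results for the smaller group.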
Traditional methods may suggest randomizing datasets drawn from a majority population as a way to resolve unequal results across groups, but this approach can mean trading away predictive accuracy to achieve fairness for all populations.
“In this work, we argue that the fairness of predictions should be evaluated in context of the data, and that unfairness induced by inadequate sample sizes or unmeasured predictive variables should be addressed through data collection, rather than by constraining the model,” reads the paper, titled “Why Is My Classifier Discriminatory?”
Differences in predictive accuracy can sometimes be explained by a lack of data or by outcomes that are inherently unpredictable. The researchers suggest that AI models be analyzed for model bias, model variance, and outcome noise before fairness criteria are applied.
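The paper's formal decomposition is more involved, but a rough proxy for the "is it the data or the model?" question can be sketched as below. The helper names and the logistic-regression model are assumptions for illustration, not the authors' code: retraining on bootstrap resamples and watching how much each group's error fluctuates hints at whether that error is dominated by variance (too little data) rather than by bias or noise.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import zero_one_loss

def per_group_error(model, X, y, groups):
    """Zero-one loss computed separately for each subgroup (inputs are NumPy arrays)."""
    return {g: zero_one_loss(y[groups == g], model.predict(X[groups == g]))
            for g in np.unique(groups)}

def bootstrap_group_errors(X_train, y_train, X_test, y_test, test_groups,
                           n_boot=50, seed=0):
    """Refit on bootstrap resamples; the spread of each group's error is a crude variance signal."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X_train), len(X_train))  # resample the training set with replacement
        model = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
        runs.append(per_group_error(model, X_test, y_test, test_groups))
    return runs
```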
“This exposes and separates the adverse impact of inadequate data collection and the choice of the model on fairness. The cost of fairness need not always be one of predictive accuracy, but one of investment in data collection and model development. In high-stakes applications, the benefits often outweigh the costs,” the paper reads.
Once these evaluations have taken place, the researchers suggest procedures for estimating the impact of collecting additional training samples and for clustering data to identify subpopulations that receive unequal results, which can then guide the collection of additional variables.
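As a loose illustration of that clustering step (again an assumption-laden sketch using KMeans, not the exact procedure from the paper), one could cluster the test examples and report each cluster's error rate to surface subpopulations the model serves poorly:

```python
import numpy as np
from sklearn.cluster import KMeans

def error_by_cluster(X_test, y_test, y_pred, n_clusters=5, seed=0):
    """Cluster test examples, then report the misclassification rate inside each cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X_test)
    return {c: float(np.mean(y_pred[labels == c] != y_test[labels == c]))
            for c in range(n_clusters)}
```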
This approach was used to achieve more equal results for income prediction based on census data, book review texts, and death rates of patients in critical care.
The results will be presented next month at the Neural Information Processing Systems (NIPS) conference in Montreal.
As concern has grown in the past year over the potential of bias in AI producing inaccurate results that impact human lives, a number of tools and approaches have been introduced.
This spring, startup Pymetrics open-sourced its bias detection tool Audit AI; in September, IBM launched an algorithmic bias detection cloud service, and Google introduced AI bias visualization with the What-If Tool for TensorBoard.
Other best practices meant to reduce the potential for bias in AI include IBM's proposed factsheets for AI services and Datasheets for Datasets, an approach to sharing essential information about the datasets used to train AI models, recommended by Microsoft Research's Timnit Gebru and AI Now Institute cofounder Kate Crawford.
Source: VentureBeat