
How to tackle AI bias for people with disabilities

Today is International Day of Persons with Disabilities. It’s an occasion to promote the well-being of people with disabilities in every aspect of life. AI-based systems are already making a difference, but they are not a panacea. We must be diligent in how we build AI models and correct course if and when things go awry.
In the wake of several examples of unwanted bias in AI systems, many AI developers are acutely aware of the need to treat marginalized groups fairly, especially with respect to race and gender. One approach is to balance the data used to train AI models so that all groups are represented appropriately. There are also many ways to check mathematically for bias against a protected group and to make corrections.
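As a rough illustration of that kind of mathematical check, the sketch below computes the rate of favorable outcomes for each group and the ratio between them, flagging a potential problem when the disadvantaged group falls below 80 percent of the advantaged group's rate. The data, the group labels, and the 0.8 threshold are illustrative assumptions, not a reference to any specific toolkit.

import numpy as np

# Illustrative only: binary model decisions (1 = favorable outcome) and a
# binary protected attribute (group "A" vs. group "B"). All values are made up.
decisions = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["A"] * 5 + ["B"] * 5)

# Selection rate: fraction of each group that receives the favorable outcome.
rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()

# Disparate-impact ratio: a common rule of thumb flags a problem when the
# lower rate is less than 80% of the higher rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: investigate and correct the model or data.")

Checks like this presuppose that group membership is recorded in the data, which, as discussed below, is often not the case for disability.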
But disability is one very important aspect of diversity that has been neglected. The World Health Organization estimates that 15 percent of people worldwide have some form of impairment that can lead to disability. Almost all of us will experience sensory, physical or cognitive disability in our lives. Whether permanent or temporary, this is a normal part of human experience that technology can and should accommodate.
That includes AI systems, but there’s a catch. Disability is different in two fundamental ways from other protected attributes like race and gender: extreme diversity and data privacy.

Extreme diversity

Disability is not a simple concept with a small number of possible values. It has many dimensions, varies in intensity and impact, and often changes over time. As defined by the United Nations Convention on the Rights of Persons with Disabilities, disability “results from the interaction between persons with impairments and attitudinal and environmental barriers that hinders their full and effective participation in society.”
As such, it depends on context and comes in many forms, including physical, sensory, and communication barriers. The issues faced by a visually impaired person navigating a city are very different from those of someone in a wheelchair, and a blind person has different challenges from someone with low vision. What this means is that data describing a person with a disability may look unique. The strategy of achieving fairness by building balanced training data sets, which AI developers use for other demographic attributes, cannot easily be applied to the diverse world of disability.
One important consequence of having a disability is that it can lead us to do things in a different way, or to look or act differently. As a result, disabled people may be outliers in the data, not fitting the patterns a machine learning model has learned. There is a risk that outlier individuals will not receive fair treatment from systems that rely on learned statistical norms.
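To make the outlier point concrete, the sketch below applies a simple z-score rule that flags any data point far from the statistical norm of the training population. The feature, its values, and the three-standard-deviation threshold are illustrative assumptions.

import numpy as np

# Illustrative only: a single feature (say, task completion time in seconds)
# for a general population, plus one person whose assistive-technology
# workflow produces a very different value. All numbers are made up.
population = np.random.default_rng(0).normal(loc=60.0, scale=5.0, size=500)
sample = np.append(population, 140.0)  # the atypical data point

# Flag anything more than 3 standard deviations from the mean as an outlier.
z_scores = (sample - sample.mean()) / sample.std()
outliers = np.abs(z_scores) > 3
print(f"{outliers.sum()} point(s) flagged as outliers")
# A model trained on the bulk of this data would treat the last point as
# noise, which is exactly how statistically atypical users can be mishandled.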

Data privacy

Compounding the challenge is that many people have privacy concerns about sharing disability information. The Americans with Disabilities Act prohibits employers from asking candidates about their disability status during the hiring process. This kind of “fairness through unawareness” aims to let candidates be evaluated based purely on their ability to do the job.
People with disabilities know from experience that revealing a disability can be risky. I recently listened to a group of students discussing the pros and cons of disclosing their disabilities when applying for internships. One chooses not to disclose, believing it would reduce their chances. Another must reveal his disability so that accommodations can be provided during the application process. A third chooses to disclose by including relevant professional experience at disability organizations in her resume. She argues that her disability is an important driver of her experience and talents, and that this approach will filter out employers who would view her disability as a negative.
These examples illustrate both the sensitivity of disability information and some of the reasons the data used to train AI systems does not always contain explicit disability information, yet may still reflect the presence of a disability. Contributing their own data to help test or train AI systems may serve the public good, but it exposes people with disabilities to personal risk. Even if the data is anonymized, the unusual nature of a person’s situation may make them “re-identifiable.” Yet without disability information, existing methods of testing for and removing bias in AI models cannot be applied.
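As a rough illustration of that re-identification risk, the sketch below counts how many records share each combination of remaining attributes after names are removed; any combination that occurs only once is effectively unique and points back to a single person. The records and attribute names are hypothetical.

from collections import Counter

# Illustrative only: records with direct identifiers removed, but with
# quasi-identifiers (age band, city, assistive technology used) retained.
records = [
    ("30-39", "Springfield", "none"),
    ("30-39", "Springfield", "none"),
    ("30-39", "Springfield", "none"),
    ("30-39", "Springfield", "screen reader"),  # unique combination
    ("40-49", "Springfield", "none"),
]

counts = Counter(records)
unique = [combo for combo, n in counts.items() if n == 1]
print(f"{len(unique)} record(s) are unique on their quasi-identifiers")
# A unique combination makes a person re-identifiable even without a name,
# which is one reason disability-related attributes are especially sensitive.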

Ensuring fairness

To ensure AI-based systems treat people with disabilities fairly, it is essential to include them in the development process. Developers must take the time to consider who the outliers might be, and who might be affected by the solutions they are developing. For example, a voice-controlled service might fail people with speech impairments or deaf speakers whose voices are not well understood by today’s speech recognition systems. Likewise, a timed online assessment might not be fair to people who use assistive technologies to access it.
The best path ahead is to seek out the affected stakeholders and work with them towards a fair and equitable system. If we can identify and remove bias against people with disabilities from our technologies, we will be taking an important step towards creating a society that respects and upholds the human rights of us all.
Source: VentureBeat
