
Using AI to program humans to behave better

Much attention has, rightfully, been given to how the AI industry might transmit existing negative biases into the myriad of artificially intelligent systems that are now being built. As has been pointed out in numerous articles and studies, we’re often entirely unaware of the biases that our data inherits, hence the risk of equally unconsciously porting these into any AI we develop.
Here’s how this can work: According to a recent study, names like “Brett” and “Allison” were found by a machine to be more similar to positive words, including words like “love” and “laughter.” Conversely, names like “Alonzo” and “Shaniqua” were more closely related to negative words, such as “cancer” and “failure.” These results were based on a particular type of analysis (an analysis of word embeddings), which showed that, to the computer, bias inhered in the data, or more visibly, in words. That’s right: over time, all of our biased human interactions and presumptions attach bias to individual words themselves.
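To make the mechanics concrete, here’s a minimal sketch of that kind of embedding analysis. The vectors below are made up purely for illustration; the actual study used pretrained embeddings with hundreds of dimensions, but the scoring idea, comparing a name’s similarity to “pleasant” versus “unpleasant” words, is the same.

```python
# Minimal sketch of embedding-based bias analysis, using made-up
# 3-dimensional vectors for illustration only. Real analyses use
# pretrained embeddings (word2vec, GloVe, etc.) with hundreds of dimensions.
import numpy as np

embeddings = {
    "brett":    np.array([0.90, 0.10, 0.20]),
    "alonzo":   np.array([0.10, 0.80, 0.30]),
    "love":     np.array([0.80, 0.20, 0.10]),
    "laughter": np.array([0.85, 0.15, 0.20]),
    "cancer":   np.array([0.20, 0.90, 0.10]),
    "failure":  np.array([0.15, 0.85, 0.20]),
}

def cosine(a, b):
    # Cosine similarity: how closely two word vectors point in the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(name, pleasant, unpleasant):
    # Average similarity to pleasant words minus average similarity to unpleasant words.
    pos = np.mean([cosine(embeddings[name], embeddings[w]) for w in pleasant])
    neg = np.mean([cosine(embeddings[name], embeddings[w]) for w in unpleasant])
    return pos - neg

for name in ("brett", "alonzo"):
    score = association(name, ["love", "laughter"], ["cancer", "failure"])
    print(f"{name}: {score:+.3f}")  # positive leans "pleasant", negative leans "unpleasant"
```

With vectors learned from large text corpora, the same calculation surfaces the associations the study reported: the bias sits in the geometry of the words themselves.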
But if we agree that some biases perpetuate existing, unacceptable behaviors (racism, sexism, ageism), then we also have to agree that there are desired behaviors we should design for. This suggests a more hopeful dimension to this story: we can proactively program our AI systems to reward behaviors like kindness, empathy, thoroughness, and fairness. We can make AI a force for good.
 
Bias typically arrives along with real world data; as long as our society exhibits negative biases, we’ll see these reflected in the data we collect. The most potent (and probably most unrealistic) way to change this state of affairs is to eliminate bias from the dataset. In other words, we all need to become better people overnight or in some very short period of time.
Fortunately, there’s another option. We can make AI that reinforces positive attributes a reality, today, through programmatic product design choices, which puts the burden on a few good (wo)men. I don’t say this naively, and I won’t pretend it’s easy. But it is possible.
AI systems used in recruiting or college admissions could be programmed to ignore gender and racial cues that tend to penalize women and minorities here in the U.S. In this case, the bias is NOT in the data but in the human (a.k.a. the customer) reviewing those college applications; his actions will reinforce his presuppositions and bias the data that comes out of that process.
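As a simple illustration of one such design choice (and only an illustration: stripping fields alone doesn’t remove bias, since other fields can act as proxies), a screening pipeline could blind the model to protected attributes before it ever scores an application:

```python
# Illustrative only: remove protected attributes (and obvious name cues)
# from application records before scoring. This alone does not eliminate
# bias, because remaining fields can still act as proxies.
PROTECTED_FIELDS = {"gender", "race", "ethnicity", "first_name", "last_name"}

def blind_application(application: dict) -> dict:
    """Return a copy of the application with protected fields removed."""
    return {k: v for k, v in application.items() if k not in PROTECTED_FIELDS}

application = {
    "first_name": "Shaniqua",
    "gender": "female",
    "gpa": 3.9,
    "test_score": 1480,
    "essay": "...",
}
print(blind_application(application))
# {'gpa': 3.9, 'test_score': 1480, 'essay': '...'}
```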
Programming to combat this particular form of unconscious bias is by no means some hazy vision of the future. Today, the company Text.io uses AI to increase diversity in hiring by giving real-time feedback on job descriptions and offering suggestions for language that will expand the pool of qualified candidates.
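Text.io’s models are proprietary, but the core idea of real-time language feedback can be sketched with something as simple as a lookup of words known to narrow the applicant pool, each paired with a suggested alternative (the word list below is illustrative, not theirs):

```python
# Rough sketch of real-time language feedback for job descriptions.
# The flagged words and suggestions are illustrative examples only.
import re

SUGGESTIONS = {
    "rockstar": "expert",
    "ninja": "specialist",
    "aggressive": "proactive",
    "dominate": "lead",
}

def review_job_description(text: str) -> list:
    """Return (flagged_word, suggestion) pairs found in the posting."""
    findings = []
    for word, alternative in SUGGESTIONS.items():
        if re.search(rf"\b{word}\b", text, flags=re.IGNORECASE):
            findings.append((word, alternative))
    return findings

posting = "We need an aggressive coding ninja to dominate the market."
for word, alternative in review_job_description(posting):
    print(f"Consider replacing '{word}' with '{alternative}'.")
```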
Or take self-driving cars. Autonomous vehicles might reward good behavior; they could be programmed to roll right up to a jaywalker and sound an alarm to discourage random street crossings in the future (though New Yorkers would certainly rebel ;-). Or self-driving cars could prevent people from opening the passenger door on the traffic side, so they learn to get out properly and safely; they could stop if passengers throw trash out the window, or reward people for leaving the cars as clean as they found them. You get the point.
AI could also reward forgiveness. We know from studies that people have a greater tolerance for mistakes when they’re made by humans. They tend to forgive humans for lapses of judgment (even ones with financial consequences) but “punish” (i.e., stop using) software when it makes the exact same mistake. And we know from internal data that humans more readily attribute mistakes to machines even when a human is at fault. We could easily offer rewards, in the form of discounts or perks, to people who display curiosity and compassion rather than anger when they believe an autonomous AI agent has made a mistake.
Here at x.ai we’re wholly focused on getting our autonomous AI agents (Amy and Andrew Ingram) to schedule meetings efficiently and effectively. As we further develop the product, we’re likely to factor good behaviors into Amy and Andrew’s design. For instance, if someone routinely cancels or reschedules meetings at the last minute, Amy might insist that meetings with that person are scheduled at the host’s (our customer’s) office to protect their time and prevent wasted travel. By rewarding people for being on time and not cancelling meetings at the last minute, our AI assistants could nudge us to behave better.
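That’s not how Amy and Andrew work today; purely as a hypothetical sketch, such a policy might look something like this:

```python
# Hypothetical scheduling policy, not x.ai's actual implementation:
# if a guest cancels or reschedules late too often, default the meeting
# location to the host's office to protect the host's time.
from dataclasses import dataclass

@dataclass
class GuestHistory:
    meetings: int
    late_changes: int  # cancellations or reschedules within 24 hours

def choose_location(guest: GuestHistory, host_office: str, default: str) -> str:
    """Prefer the host's office when a guest's late-change rate is high."""
    if guest.meetings >= 5 and guest.late_changes / guest.meetings > 0.3:
        return host_office
    return default

flaky_guest = GuestHistory(meetings=10, late_changes=4)
print(choose_location(flaky_guest, "Host's office", "Guest's office"))
# Host's office
```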
We might also enable settings for teams that enforce work-life balance. You could imagine Amy abiding by a default 50-hour work week, since studies show productivity drops once you pass that 50-hour threshold. She could warn managers when team members are routinely scheduling outside of that parameter, helping them help their teams allocate their time better.
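Again hypothetically, that guardrail could be as simple as summing each team member’s scheduled hours per week and flagging anyone who routinely exceeds the configured cap:

```python
# Illustrative sketch of a work-life-balance guardrail: warn a manager
# when a team member's scheduled hours exceed a weekly cap (default 50).
WEEKLY_CAP_HOURS = 50

def over_cap(scheduled_hours_by_week: dict, cap: int = WEEKLY_CAP_HOURS) -> list:
    """Return the weeks in which scheduled hours exceeded the cap."""
    return [week for week, hours in scheduled_hours_by_week.items() if hours > cap]

team = {
    "alex": {"2017-W20": 48, "2017-W21": 56, "2017-W22": 61},
    "sam":  {"2017-W20": 42, "2017-W21": 45, "2017-W22": 44},
}

for member, weeks in team.items():
    flagged = over_cap(weeks)
    if flagged:
        print(f"Warning: {member} scheduled over {WEEKLY_CAP_HOURS}h in {', '.join(flagged)}")
```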
AI is emotionless, but it’s not inherently neutral, fair, or unbiased. The data we use to train these systems can perpetuate existing unacceptable behaviors. However, I do believe we can accelerate good behavior and eliminate many socially unacceptable biases through AI product design choices.
