Research shows algorithms help people see and correct their biases

By Carey K. Morewedge, Boston University | May 10, 2024

Algorithms are the basis of modern life. People rely on algorithmic recommendations to navigate deep catalogs and find the best movies, routes, information, products, people and investments. Because humans train algorithms on their own judgments (for example, algorithms that make recommendations on e-commerce and social media sites), algorithms learn and encode human biases.

Algorithmic recommendations exhibit bias toward popular choices and anger-provoking information, such as partisan news. At the societal level, algorithmic biases perpetuate and reinforce structural racial bias in the judicial system, gender discrimination in corporate hires, and wealth inequality in urban development.

Algorithmic bias can also be used to reduce human bias. Algorithms can reveal hidden structural biases in organizations. In a paper published in the Proceedings of the National Academy of Sciences, my colleagues and I found that algorithmic bias can help people better recognize and correct biases within themselves.

Bias in the mirror

In nine experiments, Begüm Celikitutan, Romain Cadario and I asked research participants to rate Uber drivers on their driving skill and trustworthiness, or to rate Airbnb listings on how likely they would be to rent them. We gave participants relevant details, such as the number of trips a driver had completed, a description of the property, or a star rating. We also included an irrelevant piece of information that could bias their judgments: a photo that revealed the driver’s age, gender, and attractiveness, or a name that implied the host was white or Black.

After participants made their ratings, we showed them one of two rating summaries: one showing their own ratings, the other showing the ratings of an algorithm trained on their ratings. We told participants about the biasing feature that might have influenced these ratings; for example, Airbnb guests are less likely to rent from hosts with distinctively African American names. We then asked them to judge how much influence the bias had on the ratings in each summary.
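To make the setup concrete, here is a minimal, hypothetical sketch, not the paper’s actual data or model, of how an algorithm trained on one participant’s ratings can end up encoding that participant’s bias. The feature names, the size of the simulated bias, and the use of a simple least-squares model are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation: a participant rates 200 Airbnb listings.
# Ratings should depend only on the relevant feature (star rating),
# but we build in a small bias toward hosts with white-sounding names.
n = 200
star_rating = rng.uniform(1, 5, n)            # relevant feature
white_sounding_name = rng.integers(0, 2, n)   # irrelevant, biasing feature

true_bias = 0.4   # boost the simulated participant unknowingly gives such hosts
participant_rating = (
    1.0 * star_rating + true_bias * white_sounding_name + rng.normal(0, 0.3, n)
)

# "Train an algorithm on the participant's ratings": ordinary least squares
X = np.column_stack([np.ones(n), star_rating, white_sounding_name])
coef, *_ = np.linalg.lstsq(X, participant_rating, rcond=None)

print(f"learned weight on star rating:       {coef[1]:.2f}")
print(f"learned weight on host-name feature: {coef[2]:.2f}  (mirrors the bias)")
```

In this toy setup, the fitted weight on the irrelevant host-name feature comes out close to the bias the simulated participant expressed, so the algorithm’s ratings reproduce that bias; the rating summaries in our experiments surfaced this kind of mirrored bias back to participants.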

Whether participants considered the biasing influence of race, age, gender, or attractiveness, they saw more bias in the ratings made by the algorithms than in their own. This algorithmic mirror effect held whether participants judged the ratings of real algorithms trained on their ratings or whether we showed participants their own ratings and misleadingly told them that an algorithm had made them.

Participants saw more bias in the algorithms’ decisions than in their own even when we offered them a cash bonus if their bias judgments matched those of a different participant who saw the same decisions. The algorithmic mirror effect held even when participants belonged to the group subject to the bias, for example identifying as a woman or as Black.

Just as research participants were able to see biases in other people’s decisions, they were able to see biases in algorithms trained on their own decisions. Additionally, participants were more likely to see the influence of racial bias in the algorithms’ decisions than in their own, but they were just as likely to see the influence of defensible features, such as star ratings, in the algorithms’ decisions as in their own.

Bias blind spot

People see more of their biases in algorithms because algorithms remove their bias blind spots. It’s easier to see biases in other people’s decisions than in your own because you use different evidence to evaluate them.

When examining your own decisions for bias, you search for evidence of conscious bias: whether you considered race, gender, age, status, or other irrelevant characteristics when deciding. You overlook and excuse bias in your decisions because you lack access to the associative machinery that drives your intuitive judgments, which is where biases often arise. You might think, “I didn’t consider their race or gender when I hired them. I hired them on merit alone.”

When examining other people’s decisions for bias, you lack access to the processes they used to make them. So you scrutinize their decisions for biased outcomes, cases where bias is evident and harder to excuse. You may notice, for example, that they only hire white men.

Algorithms remove the bias blind spot because you see algorithms more the way you see other people than the way you see yourself. An algorithm’s decision-making process is a black box to you, just as other people’s thinking is inaccessible to you.

The participants in our study who were most likely to show the bias blind spot were also the most likely to see more bias in the algorithms’ decisions than in their own.

People also externalize bias onto algorithms. Seeing your bias in an algorithm is less threatening than seeing it in yourself, even when the algorithm is trained on your own choices. So people blame the algorithm. Algorithms are trained on human judgments, yet people call the reflected bias “algorithmic bias.”

A corrective lens

Our experiments show that people are also more likely to correct biases when those biases are reflected in algorithms. In a final experiment, we gave participants the chance to correct their ratings. We showed each participant their own ratings, attributing them either to the participant or to an algorithm trained on their judgments.

Participants were more likely to correct the ratings when the ratings were attributed to an algorithm, because they believed those ratings were more biased. As a result, the final corrected ratings were less biased when they had been attributed to an algorithm.

Algorithmic biases with harmful effects are well documented. Our findings show that algorithmic bias can be used for good. The first step to correcting bias is to recognize its impact and direction. Algorithms can improve our decision-making by being mirrors that reveal our biases.

This article is republished from The Conversation, an independent, nonprofit news organization providing facts and authoritative analysis to help you understand our complex world. Written by: Carey K. Morewedge, Boston University


Carey K. Morewedge does not work for, consult for, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
