Adding fairness to AI is crucial, and getting it right is difficult

By Ferdinando Fioretto | March 19, 2024

AI’s capacity to process and analyze large amounts of data has revolutionized decision-making processes, making operations in healthcare, finance, criminal justice, and other sectors of society more efficient and, in many cases, more effective.

But this transformative power comes with an important responsibility: the need to ensure that these technologies are developed and used in a fair and equitable manner. In short, artificial intelligence needs to be fair.

The pursuit of fairness in AI is not only an ethical imperative but also a practical necessity for promoting trust, inclusivity, and the responsible advancement of the technology. Yet ensuring that AI is fair is a major challenge. Moreover, my research as a computer scientist who studies AI shows that attempts to ensure fairness in AI can have unintended consequences.

Why is fairness important in artificial intelligence?

Fairness in artificial intelligence has emerged as a critical area of focus for researchers, developers, and policymakers. It transcends technical achievement by addressing the ethical, social, and legal dimensions of the technology.

Ethically, fairness is a cornerstone for building trust in and acceptance of AI systems. People need to trust that AI decisions that affect their lives, such as hiring algorithms, are made fairly. Socially, AI systems that embody fairness can help address and mitigate historical biases, for example those against women and minorities, and thereby promote inclusivity. Legally, incorporating fairness into AI systems helps align them with anti-discrimination laws and regulations around the world.

Unfairness can arise from two primary sources: the input data and the algorithms themselves. Research has shown that input data can perpetuate bias across various sectors of society. In hiring, for example, algorithms that reflect societal prejudices or are trained on data that lacks diversity can perpetuate “like me” biases, favoring candidates who resemble the decision-makers or the people already in an organization. When such biased data is then used to train a machine learning algorithm to assist a decision-maker, the algorithm can propagate, and even amplify, these biases.
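
As a rough illustration of that loop, here is a minimal Python sketch; the scores, thresholds, and model are assumptions made for this example, not any real hiring system or a study cited in this article. Historical decisions apply a stricter bar to one group, and a model trained on those labels then rates an equally qualified candidate from that group lower.

```python
# Illustrative sketch: a model trained on biased hiring labels reproduces the bias.
# All numbers and thresholds here are assumptions for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
score = rng.uniform(0, 1, n)        # true qualification of each past applicant
group = rng.integers(0, 2, n)       # 0 = majority group, 1 = minority group

# Biased historical decisions: group 1 needed a higher score to be hired.
hired = np.where(group == 0, score > 0.5, score > 0.7).astype(int)

# Train a model to imitate the historical decisions.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, hired)

# Two equally qualified new applicants (score 0.6), one from each group.
applicants = np.array([[0.6, 0.0], [0.6, 1.0]])
print(model.predict_proba(applicants)[:, 1])
# The group-1 applicant receives a noticeably lower predicted hiring probability.
```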

Why is fairness difficult in artificial intelligence?

Fairness is inherently subjective and influenced by cultural, social and personal perspectives. In the context of artificial intelligence, researchers, developers, and policymakers often translate fairness into the idea that algorithms should not perpetuate or exacerbate existing biases or inequalities.

But measuring fairness and incorporating it into AI systems is fraught with subjective decisions and technical challenges. Researchers and policymakers have proposed various definitions of fairness, such as demographic parity, equal opportunity, and individual fairness.

These definitions include different mathematical formulations and underlying philosophies. They also often conflict, highlighting the difficulty of meeting all fairness criteria simultaneously in practice.
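
To make the tension concrete, here is a minimal Python sketch that computes two of these criteria on the same set of predictions; the data is a toy example of my own, not drawn from any study. The predictions satisfy equal opportunity while clearly violating demographic parity, showing how the definitions can pull in different directions.

```python
# Illustrative toy example: two fairness metrics can disagree on the same predictions.
import numpy as np

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # protected attribute
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])   # whether the candidate is qualified
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])   # the model's hiring decision

def selection_rate(pred, grp, g):
    return pred[grp == g].mean()

def true_positive_rate(true, pred, grp, g):
    mask = (grp == g) & (true == 1)
    return pred[mask].mean()

# Demographic parity: compare selection rates across groups.
dp_gap = abs(selection_rate(y_pred, group, 0) - selection_rate(y_pred, group, 1))

# Equal opportunity: compare true positive rates across groups.
eo_gap = abs(true_positive_rate(y_true, y_pred, group, 0)
             - true_positive_rate(y_true, y_pred, group, 1))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.50 for this toy data
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.00 for this toy data
```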

Moreover, fairness cannot be reduced to a single metric or guideline. It encompasses a range of aspects, including but not limited to equality of opportunity, equality of treatment, and equality of impact.

Unintended effects on fairness

The multifaceted nature of fairness means that AI systems need to be examined at every stage of their development cycle, from initial design and data collection to final deployment and ongoing evaluation. This examination reveals another layer of complexity: AI systems are rarely deployed in isolation. They are typically embedded in complex and consequential decision-making processes, such as making recommendations about hiring or allocating funds and resources, and they operate under many constraints, including security and privacy requirements.

Research my colleagues and I have conducted shows that constraints such as computational resources, hardware types, and privacy requirements can significantly affect the fairness of AI systems. For example, the need for computational efficiency can lead to simplifications that inadvertently overlook or misrepresent marginalized groups.

In our research on network pruning, a method for making complex machine learning models smaller and faster, we found that this process can unfairly impact certain groups. This is because pruning may not take into account how different groups are represented in the data and model, which can lead to biased results.
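
As a simplified illustration of the kind of disparity this can create, here is a Python sketch in which a linear model stands in for a neural network and the smallest-magnitude weight is removed, as magnitude pruning would remove it. The data, group sizes, and model are assumptions chosen to make the effect visible, not our actual experimental setup.

```python
# Illustrative sketch: pruning the "least important" weight can hurt the group that relies on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_major, n_minor = 900, 100

# Majority group: the label depends on feature 0.
X_major = rng.normal(size=(n_major, 2))
y_major = (X_major[:, 0] > 0).astype(int)

# Minority group: the label depends on feature 1 instead.
X_minor = rng.normal(size=(n_minor, 2))
y_minor = (X_minor[:, 1] > 0).astype(int)

X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])
group = np.array([0] * n_major + [1] * n_minor)

model = LogisticRegression().fit(X, y)

def per_group_accuracy(coef, intercept):
    pred = (X @ coef + intercept > 0).astype(int)
    return [round((pred[group == g] == y[group == g]).mean(), 2) for g in (0, 1)]

coef = model.coef_.ravel().copy()
print("before pruning:", per_group_accuracy(coef, model.intercept_[0]))

# "Prune" the smallest-magnitude weight, as magnitude pruning would.
coef[np.argmin(np.abs(coef))] = 0.0
print("after pruning: ", per_group_accuracy(coef, model.intercept_[0]))
# The majority group's accuracy barely moves; the minority group's drops toward chance.
```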

Similarly, privacy-preserving techniques, although crucial, can obscure the data needed to identify and mitigate bias, or can disproportionately affect outcomes for minorities. For example, when statistical agencies add noise to data to protect privacy, the added noise can affect some groups more than others, leading to unfair resource allocation and distorting downstream decisions that rely on the data, such as how resources are allocated to public services.
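
The following Python sketch illustrates the mechanism with made-up numbers; the counts, budget, and noise scale are assumptions, and adding Laplace noise to group counts is a textbook simplification rather than any agency's actual procedure. The same amount of noise distorts the small group's share of the budget far more, relative to its size.

```python
# Illustrative sketch: identical privacy noise distorts small groups much more in relative terms.
import numpy as np

rng = np.random.default_rng(1)
true_counts = np.array([100_000.0, 500.0])  # a large group and a small group
budget = 1_000_000.0                        # total resources split in proportion to population
noise_scale = 20.0                          # Laplace noise scale (assumed privacy level)

true_share = budget * true_counts / true_counts.sum()

# Average the distortion over many draws of the privacy noise.
errors = []
for _ in range(1000):
    noisy = np.clip(true_counts + rng.laplace(scale=noise_scale, size=2), 0, None)
    noisy_share = budget * noisy / noisy.sum()
    errors.append(np.abs(noisy_share - true_share) / true_share)

mean_error = np.mean(errors, axis=0)
print(f"large group: mean relative error in allocation {mean_error[0]:.3%}")
print(f"small group: mean relative error in allocation {mean_error[1]:.3%}")
```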

These constraints do not operate in isolation; they intersect in ways that compound their impact on fairness. For example, when privacy-preserving measures amplify biases in the data, they can further exacerbate existing inequalities. This makes it important for AI development to take a comprehensive approach to both privacy and fairness.

The way forward

Making AI fair is not easy, and there is no one-size-fits-all solution. It requires a process of continuous learning, adaptation, and collaboration. Given how prevalent bias is in society, I believe that people working in the field of artificial intelligence should recognize that perfect fairness is unattainable and should instead strive for continuous improvement.

This challenge requires rigorous research, thoughtful policymaking, and a commitment to ethical practices. For AI to work fairly, researchers, developers, and users will need to ensure that fairness considerations are built into every aspect of the AI pipeline, from its conception through data collection and algorithm design to deployment and beyond.

This article is republished from The Conversation, an independent, nonprofit news organization providing facts and authoritative analysis to help you understand our complex world. Written by: Ferdinando Fioretto, University of Virginia

Ferdinando Fioretto receives funding from the National Science Foundation, Google, and Amazon.
