Tackling bias in artificial intelligence (and in humans)

Here is a brief excerpt from an article written by Jake Silberg and James Manyika for the McKinsey Quarterly, published by McKinsey & Company.


*     *     *

AI has the potential to help humans make fairer decisions—but only if we carefully work toward fairness in AI systems as well.
The growing use of artificial intelligence in sensitive areas, including for hiring, criminal justice, and healthcare, has stirred a debate about bias and fairness. Yet human decision making in these and other domains can also be flawed, shaped by individual and societal biases that are often unconscious. Will AI’s decisions be less biased than human ones? Or will AI make these problems worse?

In Notes from the AI frontier: Tackling bias in AI (and in humans) (PDF–120KB), we provide an overview of where algorithms can help reduce disparities caused by human biases, and of where more human vigilance is needed to critically analyze the unfair biases that can become baked in and scaled by AI systems. This article, a shorter version of that piece, also highlights some of the research underway to address the challenges of bias in AI and suggests six pragmatic ways forward.

Two opportunities present themselves in the debate. The first is the opportunity to use AI to identify and reduce the effect of human biases. The second is the opportunity to improve AI systems themselves, from how they leverage data to how they are developed, deployed, and used, to prevent them from perpetuating human and societal biases or creating bias and related challenges of their own. Realizing these opportunities will require collaboration across disciplines to further develop and implement technical improvements, operational practices, and ethical standards.

AI can help reduce bias, but it can also bake in and scale bias

Biases in how humans make decisions are well documented. Some researchers have highlighted how judges’ decisions can be unconsciously influenced by their own personal characteristics, while employers have been shown to grant interviews at different rates to candidates with identical resumes but with names considered to reflect different racial groups. Humans are also prone to misapplying information. For example, employers may review prospective employees’ credit histories in ways that can hurt minority groups, even though a definitive link between credit history and on-the-job behavior has not been established. Human decisions are also difficult to probe or review: people may lie about the factors they considered, or may not understand the factors that influenced their thinking, leaving room for unconscious bias.

In many cases, AI can reduce humans’ subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve their predictive accuracy, based on the training data used. In addition, some evidence shows that algorithms can improve decision making, causing it to become fairer in the process. For example, Jon Kleinberg and others have shown that algorithms could help reduce racial disparities in the criminal justice system. Another study found that automated financial underwriting systems particularly benefit historically underserved applicants. Unlike human decisions, decisions made by AI could in principle (and increasingly in practice) be opened up, examined, and interrogated. To quote Andrew McAfee of MIT, “If you want the bias out, get the algorithms in.”

At the same time, extensive evidence suggests that AI models can embed human and societal biases and deploy them at scale. Julia Angwin and others at ProPublica have shown how COMPAS, used to predict recidivism in Broward County, Florida, incorrectly labeled African-American defendants as “high-risk” at nearly twice the rate it mislabeled white defendants. Recently, a technology company discontinued development of a hiring algorithm based on analyzing previous decisions after discovering that the algorithm penalized applicants from women’s colleges. Work by Joy Buolamwini and Timnit Gebru found error rates in facial analysis technologies differed by race and gender. In the “CEO image search,” only 11 percent of the top image results for “CEO” showed women, whereas women were 27 percent of US CEOs at the time.

Underlying data are often the source of bias

Underlying data rather than the algorithm itself are most often the main source of the issue. Models may be trained on data containing human decisions or on data that reflect second-order effects of societal or historical inequities. For example, word embeddings (a set of natural language processing techniques) trained on news articles may exhibit the gender stereotypes found in society.
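
As a rough illustration of how such stereotypes are surfaced, the sketch below probes pretrained word vectors for gender associations. It is not from the article: it assumes the gensim library and one of its downloadable GloVe models, and the occupation words are chosen purely for illustration.

# A minimal probe for gender associations in pretrained word embeddings.
# Assumes the gensim library and its downloadable GloVe vectors, neither of
# which is mentioned in the article; the word list is illustrative only.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads pretrained vectors on first use

for occupation in ["nurse", "receptionist", "engineer", "programmer"]:
    to_she = vectors.similarity(occupation, "she")
    to_he = vectors.similarity(occupation, "he")
    print(f"{occupation:>12}: cosine similarity to 'she' {to_she:.2f}, to 'he' {to_he:.2f}")

Occupation words that sit systematically closer to one gendered pronoun than the other are one simple signal that the training text carried a stereotype into the vectors.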

Bias can also be introduced into the data through how they are collected or selected for use. In criminal justice models, oversampling certain neighborhoods because they are overpoliced can result in recording more crime there, which in turn prompts more policing, reinforcing the original skew.

Data generated by users can also create a feedback loop that leads to bias. In Latanya Sweeney’s research on racial differences in online ad targeting, searches for African-American-identifying names tended to result in more ads featuring the word “arrest” than searches for white-identifying names. Sweeney hypothesized that even if different versions of the ad copy—versions with and without “arrest”—were initially displayed equally, users may have clicked on different versions more frequently for different searches, leading the algorithm to display them more often.
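
A minimal simulation, with entirely made-up click rates, shows how such a loop can amplify a small initial difference; nothing below reflects the actual ad system Sweeney studied.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical click-through rates: users searching each name type click the
# "arrest" ad copy at slightly different rates (illustrative numbers only).
click_prob = {"A": {"arrest": 0.06, "neutral": 0.05},
              "B": {"arrest": 0.04, "neutral": 0.05}}

# The ad server starts by showing both copies equally for every search type,
# then re-weights each copy in proportion to the clicks it has earned so far.
weights = {g: {"arrest": 1.0, "neutral": 1.0} for g in click_prob}

for step in range(50_000):
    group = rng.choice(["A", "B"])
    w = weights[group]
    copy = "arrest" if rng.random() < w["arrest"] / (w["arrest"] + w["neutral"]) else "neutral"
    if rng.random() < click_prob[group][copy]:
        w[copy] += 1.0  # clicks reinforce the copy that was shown

for group, w in weights.items():
    share = w["arrest"] / (w["arrest"] + w["neutral"])
    print(f"search type {group}: 'arrest' copy shown {share:.0%} of the time")

Both copies start with equal exposure, but the copy that earns slightly more clicks for a given search type is shown more often, and the gap widens as traffic accumulates.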

A machine learning algorithm may also pick up on statistical correlations that are societally unacceptable or illegal. For example, if a mortgage lending model finds that older individuals have a higher likelihood of defaulting and reduces lending based on age, society and legal institutions may consider this to be illegal age discrimination.
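
One simple way to surface such a correlation is to audit a trained model's outputs against the sensitive attribute. The sketch below does this on synthetic loan data invented for illustration, using scikit-learn; it is not a description of any real underwriting system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical applicants: income (in thousands) and age, with a default
# label that is, by construction, correlated with age in this synthetic data.
age = rng.integers(21, 80, n)
income = rng.normal(50, 15, n)
default = (rng.random(n) < 0.05 + 0.004 * (age - 21)).astype(int)

X = np.column_stack([income, age])
model = LogisticRegression().fit(X, default)

# Audit: does the model's predicted risk vary systematically with age?
risk = model.predict_proba(X)[:, 1]
for lo, hi in [(21, 40), (40, 60), (60, 80)]:
    band = (age >= lo) & (age < hi)
    print(f"age {lo}-{hi}: mean predicted default risk {risk[band].mean():.2f}")

If the audit shows predicted risk climbing with age, the model has learned exactly the correlation that legal institutions may treat as prohibited age discrimination, whether or not age was the intended signal.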

To minimize bias, how do we define and measure fairness?

How should we codify definitions of fairness? Arvind Narayanan identified at least 21 different definitions of fairness and said that even that was “non-exhaustive.” Kate Crawford, co-director of the AI Now Institute at New York University, used the CEO image search mentioned earlier to highlight the complexities involved: how would we determine the “fair” percentage of women the algorithm should show? Is it the percentage of women CEOs we have today? Or might the “fair” number be 50 percent, even if the real world is not there yet? Much of the conversation about definitions has focused on individual fairness, or treating similar individuals similarly, and on group fairness—making the model’s predictions or outcomes equitable across groups, particularly for potentially vulnerable groups.
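
The choice of reference point is itself the codification problem Crawford describes. A small sketch, using only the figures cited above plus a hypothetical result list, shows how the same observed results score differently depending on which baseline is declared "fair."

# Hypothetical top-100 image results, matching the 11 percent figure cited above.
top_results = ["man"] * 89 + ["woman"] * 11

observed_share = sum(r == "woman" for r in top_results) / len(top_results)

# Two candidate reference points from the paragraph above: the share of US
# CEOs who were women at the time, and population parity.
baselines = {"share of US CEOs who are women": 0.27, "population parity": 0.50}

for label, fair_share in baselines.items():
    print(f"gap vs {label}: {observed_share - fair_share:+.2f}")

The measurement itself is trivial; deciding which baseline counts as fair is the hard part.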

Work to define fairness has also revealed potential trade-offs between different definitions, or between fairness and other objectives. For example, Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, as well as Alexandra Chouldechova and others, have demonstrated that a model cannot conform to more than a few group fairness metrics at the same time, except under very specific conditions. This helps explain how the company that developed the COMPAS scores could claim its system was unbiased because it satisfied “predictive parity,” while ProPublica found it biased because it did not achieve “balance for the false positives.”
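
The arithmetic behind that trade-off can be shown with invented confusion-matrix counts for two groups with different underlying reoffense rates; the numbers below are purely illustrative and are not the COMPAS data.

# Both groups get the same precision ("predictive parity") and the same
# true-positive rate, yet because their base rates differ, their
# false-positive rates must differ -- the tension formalized by Kleinberg,
# Mullainathan, Raghavan, and Chouldechova.
groups = {
    # group: (true positives, false positives, false negatives, true negatives)
    "A": (350, 150, 150, 350),   # reoffense base rate 0.50
    "B": (140,  60,  60, 740),   # reoffense base rate 0.20
}

for name, (tp, fp, fn, tn) in groups.items():
    precision = tp / (tp + fp)   # equal across groups: "predictive parity"
    tpr = tp / (tp + fn)         # also equal here
    fpr = fp / (fp + tn)         # unequal: no "balance for the false positives"
    print(f"group {name}: precision {precision:.2f}, true-positive rate {tpr:.2f}, "
          f"false-positive rate {fpr:.2f}")

With precision and the true-positive rate held equal across groups whose base rates differ, equal false-positive rates are arithmetically unreachable, which is why each side of the COMPAS debate could point to a metric that supported its claim.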

Experts disagree on the best way to resolve these trade-offs. For example, some have suggested that setting different decision thresholds for different groups (such as the predicted score required to receive a loan) may achieve the best balance, particularly if we believe some of the underlying variables in the model may be biased. Others contend that maintaining a single threshold is fairer to all groups. As a result of these complexities, crafting a single, universal definition of fairness or a metric to measure it will probably never be possible. Instead, different metrics and standards will likely be required, depending on the use case and circumstances.
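
A small sketch with synthetic scores, invented for illustration, shows what is at stake in that choice: a single threshold produces different approval rates when one group's scores run lower, while group-specific thresholds can equalize approval rates at the cost of treating identical scores differently.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical credit scores for two groups; group B's scores are shifted
# lower, e.g. because a variable feeding the score is itself biased.
scores = {"A": rng.normal(650, 50, 10_000), "B": rng.normal(620, 50, 10_000)}

single_threshold = 640
print("single threshold:")
for g, s in scores.items():
    print(f"  group {g}: approval rate {(s >= single_threshold).mean():.0%}")

# Group-specific thresholds chosen so both groups are approved at the same
# rate -- one proposed remedy, which others argue is itself unfair.
target_rate = 0.50
print("group-specific thresholds (equal approval rates):")
for g, s in scores.items():
    threshold = np.quantile(s, 1 - target_rate)
    print(f"  group {g}: threshold {threshold:.0f}, "
          f"approval rate {(s >= threshold).mean():.0%}")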

*     *     *


Jake Silberg is a fellow at the McKinsey Global Institute (MGI). James Manyika is the chairman of MGI and a senior partner at McKinsey & Company in the San Francisco office.

This article draws from remarks the authors prepared for a recent multidisciplinary symposium on ethics in AI hosted by DeepMind Ethics and Society. The authors wish to thank Dr. Silvia Chiappa, a research scientist at DeepMind, for her insights as well as for co-chairing the fairness and bias session at the symposium with James.

In addition, the authors would like to thank the following people for their input on the ideas in this article: Mustafa Suleyman and Haibo E at DeepMind; Margaret Mitchell at Google AI and Charina Chou at Google; Professor Barbara Grosz and Lily Hu at Harvard University; Mary L. Gray and Eric Horvitz at Microsoft Research; Professor Kate Crawford at New York University and Microsoft Research; and Professor Sendhil Mullainathan at the University of Chicago. They also wish to thank their McKinsey colleagues Tara Balakrishnan, Jacques Bughin, Michael Chui, Rita Chung, Daniel First, Peter Gumbel, Mehdi Miremadi, Brittany Presten, Vasiliki Stergiou, and Chris Wigley for their contributions.
