Why businesses need explainable AI—and how to deliver it

Here is an excerpt from a “classic” article written by Liz Grennan, Andreas Kremer, Alex Singla, and Peter Zipparo for the McKinsey Quarterly, published by McKinsey & Company. To read the complete article, check out others, learn more about the firm, and sign up for email alerts, please click here.

* * *

Businesses increasingly rely on artificial intelligence (AI) systems to make decisions that can significantly affect individual rights, human safety, and critical business operations. But how do these models derive their conclusions? What data do they use? And can we trust the results?

Addressing these questions is the essence of “explainability,” and getting it right is becoming essential. While many companies have begun adopting basic tools to understand how and why AI models render their insights, unlocking the full value of AI requires a comprehensive strategy. Our research finds that companies seeing the biggest bottom-line returns from AI—those that attribute at least 20 percent of EBIT to their use of AI—are more likely than others to follow best practices that enable explainability.1 Further, organizations that establish digital trust among consumers through practices such as making AI explainable are more likely to see their annual revenue and EBIT grow at rates of 10 percent or more.2

Even as explainability gains importance, it is becoming significantly harder. Modeling techniques that today power many AI applications, such as deep learning and neural networks, are inherently more difficult for humans to understand. For all the predictive insights AI can deliver, advanced machine learning engines often remain a black box. The solution isn’t simply finding better ways to convey how a system works; rather, it’s about creating tools and processes that can help even the deep expert understand the outcome and then explain it to others.

To shed light on these systems and meet the needs of customers, employees, and regulators, organizations need to master the fundamentals of explainability. Gaining that mastery requires establishing a governance framework, putting in place the right practices, and investing in the right set of tools.

What makes explainability challenging

Explainability is the capacity to express why an AI system reached a particular decision, recommendation, or prediction. Developing this capability requires understanding how the AI model operates and the types of data used to train it. That sounds simple enough, but the more sophisticated an AI system becomes, the harder it is to pinpoint exactly how it derived a particular insight. AI engines get “smarter” over time by continually ingesting data, gauging the predictive power of different algorithmic combinations, and updating the resulting model. They do all this at blazing speeds, sometimes delivering outputs within fractions of a second.

Disentangling a first-order insight and explaining how the AI went from A to B might be relatively easy. But as AI engines interpolate and reinterpolate data, the insight audit trail becomes harder to follow.

Complicating matters, different consumers of the AI system’s data have different explainability needs. A bank that uses an AI engine to support credit decisions will need to provide consumers who are denied a loan with a reason for that outcome. Loan officers and AI practitioners might need even more granular information to help them understand the risk factors and weightings used in rendering the decision to ensure the model is tuned optimally. And the risk function or diversity office may need to confirm that the data used in the AI engine are not biased against certain applicants. Regulators and other stakeholders also will have specific needs and interests.


Five ways explainable AI can benefit organizations

Mastering explainability helps technology, business, and risk professionals in at least five ways:

  1. Increasing productivity. Techniques that enable explainability can more quickly reveal errors or areas for improvement, making it easier for the machine learning operations (MLOps) teams tasked with supervising AI systems to monitor and maintain them efficiently. As an example, understanding the specific features that lead to the model output helps technical teams confirm whether patterns identified by the model are broadly applicable and relevant to future predictions or instead reflect one-off or anomalous historical data.
  2. Building trust and adoption. Explainability is also crucial to building trust. Customers, regulators, and the public at large all need to feel confident that the AI models rendering consequential decisions are doing so in an accurate and fair way. Likewise, even the most cutting-edge AI systems will gather dust if intended users don’t understand the basis for the recommendations being supplied. Sales teams, for instance, are more apt to trust their gut over an AI application whose suggested next-best actions seem to come from a black box. Knowing why an AI application made its recommendation increases sales professionals’ confidence in following it.
  3. Surfacing new, value-generating interventions. Unpacking how a model works can also help companies surface business interventions that would otherwise remain hidden. In some cases, a deeper understanding of why a prediction was made can generate even more value than the prediction or recommendation itself. For example, a prediction of customer churn in a certain segment can be helpful by itself, but an explanation of why the churn is likely can reveal the most effective ways for the business to intervene. For one auto insurer, explainability tools such as SHAP values revealed how greater risk was associated with certain interactions between vehicle and driver attributes; the company used these insights to adjust its risk models, after which its performance improved significantly. (A minimal sketch of the SHAP approach follows this list.)
  4. Ensuring AI provides business value. When the technical team can explain how an AI system functions, the business team can confirm that the intended business objective is being met and spot situations where something was lost in translation. This ensures that an AI application is set up to deliver its expected value.
  5. Mitigating regulatory and other risks. Explainability helps organizations mitigate risks. AI systems that run afoul of ethical norms, even if inadvertently, can ignite intense public, media, and regulatory scrutiny. Legal and risk teams can use the explanation provided by the technical team, along with the intended business use case, to confirm that the system complies with applicable laws and regulations and is aligned with internal company policies and values. In some sectors, explainability is a requirement. For example, a recent bulletin issued by the California Department of Insurance requires insurers to explain adverse actions taken based on complex algorithms.3 As the use of AI grows, organizations can expect more rules concerning explainability. New regulations, such as the draft EU AI regulation, may contain specific explainability compliance steps. Even when not specifically mandated, companies will need to confirm that any tool used to render decisions such as credit determinations complies with applicable antidiscrimination laws, as well as with laws prohibiting unfair or deceptive practices.
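To make the SHAP example in item 3 concrete: the article does not describe the insurer’s actual models, so the snippet below is only an illustrative sketch. It trains a hypothetical churn classifier on made-up features (the data, feature names, and model choice are all assumptions) and uses SHAP values to rank the features driving its predictions.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical customer features and a toy churn label (illustrative only)
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "tenure_months": rng.integers(1, 72, 500),
    "monthly_spend": rng.uniform(20.0, 200.0, 500),
    "support_tickets": rng.poisson(2, 500),
})
y = (X["support_tickets"] + rng.normal(0, 1, 500) > 3).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes each individual prediction to the input features,
# which is what lets the business ask "why was this customer flagged?"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # per-sample, per-feature attributions

# Averaging absolute attributions gives a global ranking of churn drivers
drivers = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(drivers.sort_values(ascending=False))
```

The per-sample attributions answer the “why this prediction?” question for an individual customer, while the averaged values give the global feature ranking that, in the insurer example, surfaced the risky vehicle–driver interactions.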

 

* * *

Here is a direct link to the complete article.
