Confronting the risks of artificial intelligence

Here is a brief excerpt from an article written by Benjamin Cheatham, Kia Javanmardian, and Hamid Samandari for the McKinsey Quarterly, published by McKinsey & Company.

*     *     *

With great power comes great responsibility. Organizations can mitigate the risks of applying artificial intelligence and advanced analytics by embracing three principles.
Artificial intelligence (AI) is proving to be a double-edged sword. While this can be said of most new technologies, both sides of the AI blade are far sharper, and neither is well understood.

Consider first the positive. These technologies are starting to improve our lives in myriad ways, from simplifying our shopping to enhancing our healthcare experiences. Their value to businesses also has become undeniable: nearly 80 percent of executives at companies that are deploying AI recently told us that they’re already seeing moderate value from it.
Although the widespread use of AI in business is still in its infancy and questions remain open about the pace of progress, as well as the possibility of achieving the holy grail of “general intelligence,” the potential is enormous. McKinsey Global Institute research suggests that by 2030, AI could deliver additional global economic output of $13 trillion per year.

Yet even as AI generates consumer benefits and business value, it is also giving rise to a host of unwanted, and sometimes serious, consequences. And while we’re focusing on AI in this article, these knock-on effects (and the ways to prevent or mitigate them) apply equally to all advanced analytics. The most visible ones, which include privacy violations, discrimination, accidents, and manipulation of political systems, are more than enough to prompt caution. More concerning still are the consequences not yet known or experienced. Disastrous repercussions—including the loss of human life, if an AI medical algorithm goes wrong, or the compromise of national security, if an adversary feeds disinformation to a military AI system—are possible, and so are significant challenges for organizations, from reputational damage and revenue losses to regulatory backlash, criminal investigation, and diminished public trust.

Because AI is a relatively new force in business, few leaders have had the opportunity to hone their intuition about the full scope of societal, organizational, and individual risks, or to develop a working knowledge of their associated drivers, which range from the data fed into AI systems to the operation of algorithmic models and the interactions between humans and machines. As a result, executives often overlook potential perils (“We’re not using AI in anything that could ‘blow up,’ like self-driving cars”) or overestimate an organization’s risk-mitigation capabilities (“We’ve been doing analytics for a long time, so we already have the right controls in place, and our practices are in line with those of our industry peers”). It’s also common for leaders to lump in AI risks with others owned by specialists in the IT and analytics organizations (“I trust my technical team; they’re doing everything possible to protect our customers and our company”).
Leaders hoping to avoid, or at least mitigate, unintended consequences need both to build their pattern-recognition skills with respect to AI risks and to engage the entire organization so that it is ready to embrace the power and the responsibility associated with AI. The level of effort required to identify and control all key risks dramatically exceeds prevailing norms in most organizations. Making real progress demands a multidisciplinary approach involving leaders in the C-suite and across the company; experts in areas ranging from legal and risk to IT, security, and analytics; and managers who can ensure vigilance at the front lines.

This article seeks to help by first illustrating a range of easy-to-overlook pitfalls. It then presents frameworks that will assist leaders in identifying their greatest risks and implementing the breadth and depth of nuanced controls required to sidestep them. Finally, it provides an early glimpse of some real-world efforts that are currently under way to tackle AI risks through the application of these approaches.

Before continuing, we want to underscore that our focus here is on first-order consequences that arise directly from the development of AI solutions, from their inadvertent or intentional misapplication, or from the mishandling of the data inputs that fuel them. There are other important consequences, among which is the much-discussed potential for widespread job losses in some industries due to AI-driven workplace automation. There also are second-order effects, such as the atrophy of skills (for example, the diagnostic skills of medical professionals) as AI systems grow in importance. These consequences will continue receiving attention as they grow in perceived importance but are beyond our scope here.

Understanding the risks and their drivers

When something goes wrong with AI, and the root cause of the problem comes to light, there is often a great deal of head shaking. With the benefit of hindsight, it seems unimaginable that no one saw it coming. But if you poll well-placed executives about the next AI risk likely to appear, you’re unlikely to get any sort of consensus.

Leaders hoping to shift their posture from hindsight to foresight need to better understand the types of risks they are taking on, their interdependencies, and their underlying causes. To help build that missing intuition, we describe below five pain points that can give rise to AI risks. The first three—data difficulties, technology troubles, and security snags—are related to what might be termed enablers of AI. The final two are linked with the algorithms and human–machine interactions that are central to the operation of the AI itself. Clearly, we are still in the early days of understanding what lies behind the risks we are taking on, whose nature and range we’ve also sought to catalog in Exhibit 1.

*     *     *

Benjamin Cheatham is a senior partner in McKinsey’s Philadelphia office and leads QuantumBlack, a McKinsey company, in North America; Kia Javanmardian is a senior partner in the Washington, DC, office; and Hamid Samandari is a senior partner in the New York office.

The authors wish to thank Roger Burkhardt, Liz Grennan, Nicolaus Henke, Pankaj Kumar, Marie-Claude Nadeau, Derek Waldron, and Olivia White for their contributions to this article.

 
