Here is a brief excerpt from an article written by Michael Chui, James Manyika, Mehdi Miremadi, Nicolaus Henke, Rita Chung, Pieter Nel, and Sankalp Mal for the McKinsey Quarterly, published by McKinsey & Company.
* * *
An analysis of more than 400 use cases across 19 industries and nine business functions highlights the broad use and significant economic potential of advanced AI techniques.
Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.
It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.
- Mapping AI techniques to problem types
- Insights from use cases
- Sizing the potential value of AI
- The road to impact and value
Mapping AI techniques to problem types
As artificial intelligence technologies advance, so does the definition of which techniques constitute AI. For the purposes of this briefing, we use AI as shorthand for deep learning techniques that use artificial neural networks. We also examined other machine learning techniques and traditional analytics techniques (Exhibit 1).
Neural networks are a subset of machine learning techniques. Essentially, they are AI systems based on simulating connected “neural units,” loosely modeling the way that neurons interact in the brain. Computational models inspired by neural connections have been studied since the 1940s and have returned to prominence as computer processing power has increased and large training data sets have been used to successfully analyze input data such as images, video, and speech. AI practitioners refer to these techniques as “deep learning,” since neural networks have many (“deep”) layers of simulated interconnected neurons.
We analyzed the applications and value of three neural network techniques:
- Feed forward neural networks: the simplest type of artificial neural network. In this architecture, information moves in only one direction, forward, from the input layer, through the “hidden” layers, to the output layer. There are no loops in the network. The first single-neuron network was proposed as early as 1958 by AI pioneer Frank Rosenblatt. While the idea is not new, advances in computing power, training algorithms, and available data have led to higher levels of performance than previously possible.
- Recurrent neural networks (RNNs): Artificial neural networks whose connections between neurons include loops, well-suited for processing sequences of inputs. In November 2016, Oxford University researchers reported that a system based on recurrent neural networks (and convolutional neural networks) had achieved 95 percent accuracy in reading lips, outperforming experienced human lip readers, who tested at 52 percent accuracy.
- Convolutional neural networks (CNNs): Artificial neural networks in which the connections between neural layers are inspired by the organization of the animal visual cortex, the portion of the brain that processes images, making them well suited for perceptual tasks.
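To make the simplest of these architectures concrete, here is a minimal sketch of a feed forward pass in Python/NumPy. The layer sizes and random (untrained) weights are illustrative assumptions, not from the article; the point is only that information flows one way, from input through hidden layers to output, with no loops.

```python
import numpy as np

def relu(x):
    # common nonlinearity applied at each hidden layer
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """One forward pass: input -> hidden layers -> output, no loops."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)               # hidden layers
    return a @ weights[-1] + biases[-1]   # linear output layer

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]  # input, two hidden layers, output (arbitrary choices)
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

y = forward(rng.normal(size=4), weights, biases)
print(y.shape)  # (2,)
```

A recurrent network would differ only in that the hidden state feeds back into itself across a sequence of inputs; a convolutional network would replace the dense matrix products with local, weight-shared filters.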
For our use cases, we also considered two other techniques—generative adversarial networks (GANs) and reinforcement learning—but did not include them in our potential value assessment of AI, since they remain nascent techniques that are not yet widely applied.
Generative adversarial networks (GANs) use two neural networks contesting each other in a zero-sum game framework (thus “adversarial”). GANs can learn to mimic various distributions of data (for example, text, speech, and images) and are therefore valuable in generating test datasets when these are not readily available.
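The zero-sum setup can be sketched numerically. In the snippet below, the discriminator scores (made-up values, purely illustrative) are probabilities that a sample is real; the two players optimize the same cross-entropy objective in opposite directions, which is what makes the game adversarial.

```python
import numpy as np

def bce(probs, labels):
    # binary cross-entropy: the loss both players push in opposite directions
    return -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))

# Discriminator outputs: probability that a sample is real (illustrative values).
d_real = np.array([0.9, 0.8, 0.95])  # scores on real samples
d_fake = np.array([0.1, 0.2, 0.05])  # scores on generator samples

# The discriminator wants real -> 1 and fake -> 0 ...
d_loss = bce(d_real, np.ones(3)) + bce(d_fake, np.zeros(3))
# ... while the generator wants the discriminator to call its samples real.
g_loss = bce(d_fake, np.ones(3))

print(d_loss < g_loss)  # here the discriminator is winning
```

In a real GAN, each loss would be backpropagated to update the corresponding network's weights, alternating between the two players.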
Reinforcement learning is a subfield of machine learning in which systems are trained by receiving virtual “rewards” or “punishments,” essentially learning by trial and error. Google DeepMind has used reinforcement learning to develop systems that can play games, including video games and board games such as Go, better than human champions.
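The trial-and-error idea can be sketched with tabular Q-learning, one standard reinforcement learning algorithm, on a toy five-state chain (our own illustrative environment, not DeepMind's systems): the agent is rewarded only for reaching the rightmost state and gradually learns to move right.

```python
import numpy as np

# Toy deterministic chain: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 gives reward +1; every other step gives 0.
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), GOAL)
    return s2, float(s2 == GOAL), s2 == GOAL

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, 2))          # value estimate per (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                 # episodes of trial and error
    s, done = 0, False
    while not done:
        # explore randomly sometimes, otherwise act greedily
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # the "reward" feedback nudges the value estimate for (state, action)
        Q[s, a] += alpha * (r + gamma * (0 if done else Q[s2].max()) - Q[s, a])
        s = s2

policy = [int(np.argmax(Q[s])) for s in range(GOAL)]
print(policy)  # learned greedy policy: move right in every state
```

No one tells the agent the rule "go right"; it emerges purely from which action sequences earned rewards.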
In a business setting, these analytic techniques can be applied to solve real-life problems. The most prevalent problem types are classification, continuous estimation, and clustering. A list of problem types and their definitions is available in the sidebar.
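As an illustration of one of these problem types, clustering, here is a minimal k-means sketch in Python/NumPy. The synthetic data and parameters are our own assumptions, not from the article: two well-separated blobs of points that the algorithm should group without any labels.

```python
import numpy as np

def kmeans(X, centers, iters=20):
    """Minimal k-means: alternate assignment and center-update steps."""
    centers = centers.copy()
    k = len(centers)
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # move each center to the mean of the points assigned to it
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# two well-separated synthetic blobs of 50 points each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(5.0, 0.3, (50, 2))])

# seed the two centers with one point from each blob
labels, centers = kmeans(X, X[[0, -1]])
print(labels[0] != labels[-1])  # True: the two blobs land in different clusters
```

Classification would instead learn a mapping from labeled examples, and continuous estimation would predict a numeric value; clustering is distinctive in that it finds structure with no labels at all.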