Deep Learning: A Critical Appraisal

Here is a brief excerpt from an article (2018) by Gary Marcus, posted to arXiv, which is hosted by the Cornell University Library. I located it in the library's archives.
*     *     *
Although deep learning has historical roots going back decades, neither the term “deep learning” nor the approach was popular as recently as five years ago, when the field was reignited by papers such as Krizhevsky, Sutskever and Hinton’s now-classic 2012 deep network model of ImageNet.
What has the field discovered in the five subsequent years? Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.
“For most problems where deep learning has enabled transformationally better solutions (vision, speech), we’ve entered diminishing returns territory in 2016-2017.” – François Chollet, author of the Keras deep learning library
“‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.” – Geoff Hinton, grandfather of deep learning, who has asked, “Is deep learning approaching a wall?”
Although deep learning has historical roots going back decades (Schmidhuber, 2015), it attracted relatively little notice until just over five years ago.
Before 2012 was out, deep learning made the front page of The New York Times, and it rapidly became the best-known technique in artificial intelligence, by a wide margin. The general idea of training neural networks with multiple layers was not new, but, in part because of increases in computational power and data, this was the first time that deep learning truly became practical.
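For readers newer to the area, it may help to see what “training neural networks with multiple layers” means in practice. The following is a minimal illustrative sketch, not code from Marcus’s article: a tiny multi-layer network, written with NumPy, trained by backpropagation on XOR, a task a single linear layer cannot solve.

```python
# Illustrative sketch only (not from Marcus's paper): a "deep" network is a
# stack of layers trained end to end with gradient descent. Toy task: XOR.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets; no single linear layer can separate these.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two hidden layers: the "multiple layers" that make the network deep.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 8)); b2 = np.zeros(8)
W3 = rng.normal(0, 1, (8, 1)); b3 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass through the stack of layers.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Backward pass: the chain rule applied layer by layer (backpropagation),
    # here for a squared-error loss with sigmoid activations.
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    # Gradient-descent updates for every layer jointly.
    W3 -= lr * h2.T @ d_out; b3 -= lr * d_out.sum(axis=0)
    W2 -= lr * h1.T @ d_h2;  b2 -= lr * d_h2.sum(axis=0)
    W1 -= lr * X.T @ d_h1;   b1 -= lr * d_h1.sum(axis=0)

print(out.round(3))  # should approach [0, 1, 1, 0]
```

The key point is that every layer is adjusted jointly from the final error signal; modern deep learning is this same end-to-end, gradient-based recipe scaled up with far more layers, data, and compute.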
Deep learning has since yielded numerous state-of-the-art results in domains such as speech recognition, image recognition, and language translation, and it plays a role in a wide swath of current AI applications. Corporations have invested billions of dollars fighting for deep learning talent. One prominent deep learning advocate, Andrew Ng, has gone so far as to suggest that “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.”
Yet deep learning may well be approaching a wall, much as I anticipated earlier, at the beginning of the resurgence (Marcus, 2012), and as leading figures like Hinton (Sabour, Frosst, & Hinton, 2017) and Chollet (2017) have begun to imply in recent months. What exactly is deep learning, and what has it shown about the nature of intelligence? What can we expect it to do, and where might we expect it to break down? How close or far are we from “artificial general intelligence”, a point at which machines show a human-like flexibility in solving unfamiliar problems? The purpose of this paper is both to temper some irrational exuberance and to consider what we as a field might need to move forward.
This paper is written simultaneously for researchers in the field and for a growing set of AI consumers with less technical background who may wish to understand where the field is headed. As such, I will begin with a very brief, nontechnical introduction aimed at elucidating what deep learning systems do well and why (Section 2), before turning to an assessment of deep learning’s weaknesses (Section 3) and some fears that arise from misunderstandings about deep learning’s capabilities (Section 4), and closing with a perspective on going forward (Section 5).
Deep learning is not likely to disappear, nor should it. But five years into the field’s resurgence seems like a good moment for a critical reflection on what deep learning has and has not been able to achieve…thus far.
*     *     *
The complete article is available in PDF format on arXiv: https://arxiv.org/abs/1801.00631.
Gary Marcus, scientist, bestselling author, entrepreneur, and AI contrarian, was CEO and Founder of the machine learning startup Geometric Intelligence, recently acquired by Uber. As a Professor of Psychology and Neural Science at NYU, he has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, and artificial intelligence, often in leading journals such as Science and Nature.