Why the Godfather of A.I. Fears What He’s Built

Here is a brief excerpt from an article by Joshua Rothman for The New Yorker. To read the complete article, check out others, and obtain subscription information, please click here.

Illustration by Daniel Liévano

* * *

Geoffrey Hinton has spent a lifetime teaching computers to learn. Now he worries that artificial brains are better than ours.

“There’s a very general subgoal that helps with almost all goals: get more control,” Hinton said of A.I.s. “The research question is: how do you prevent them from ever wanting to take control? And nobody knows the answer.”

In your brain, neurons are arranged in networks big and small. With every action, with every thought, the networks change: neurons are included or excluded, and the connections between them strengthen or fade. This process goes on all the time—it’s happening now, as you read these words—and its scale is beyond imagining. You have some eighty billion neurons sharing a hundred trillion connections or more. Your skull contains a galaxy’s worth of constellations, always shifting.

Geoffrey Hinton, the computer scientist who is often called “the godfather of A.I.,” handed me a walking stick. “You’ll need one of these,” he said. Then he headed off along a path through the woods to the shore. It wound across a shaded clearing, past a pair of sheds, and then descended by stone steps to a small dock. “It’s slippery here,” Hinton warned, as we started down.

New knowledge incorporates itself into your existing networks in the form of subtle adjustments. Sometimes they’re temporary: if you meet a stranger at a party, his name might impress itself only briefly upon the networks in your memory. But they can also last a lifetime, if, say, that stranger becomes your spouse. Because new knowledge merges with old, what you know shapes what you learn. If someone at the party tells you about his trip to Amsterdam, the next day, at a museum, your networks may nudge you a little closer to the Vermeer. In this way, small changes create the possibility for profound transformations.

“We had a bonfire here,” Hinton said. We were on a ledge of rock jutting out into Ontario’s Georgian Bay, which stretches to the west into Lake Huron. Islands dotted the water; Hinton had bought this one in 2013, when he was sixty-five, after selling a three-person startup to Google for forty-four million dollars. Before that, he’d spent three decades as a computer-science professor at the University of Toronto—a leading figure in an unglamorous subfield known as neural networks, which was inspired by the way neurons are connected in the brain. Because artificial neural networks were only moderately successful at the tasks they undertook—image categorization, speech recognition, and so on—most researchers considered them to be at best mildly interesting, or at worst a waste of time. “Our neural nets just couldn’t do anything better than a child could,” Hinton recalled. In the nineteen-eighties, when he saw “The Terminator,” it didn’t bother him that Skynet, the movie’s world-destroying A.I., was a neural net; he was pleased to see the technology portrayed as promising.

From the small depression where the fire had been, cracks in the stone, created by the heat, radiated outward. Hinton, who is tall, slim, and English, poked the spot with his stick. A scientist through and through, he is always remarking on what is happening in the physical world: the lives of animals, the flow of currents in the bay, the geology of the island. “I put a mesh of rebar under the wood, so the air could get in, and it got hot enough that the metal actually went all soft,” he said, in a wondering tone. “That’s a real fire—something to be proud of!”

* * *

Here is a direct link to the complete article.

Joshua Rothman, the ideas editor of newyorker.com, has been at The New Yorker since 2012.
