Possible Minds: A book review by Bob Morris

Possible Minds: 25 Ways of Looking at AI
John Brockman, Editor
Penguin Press, an imprint of Penguin Random House (February 2019)

John Brockman: “new technologies = new perceptions,” some of which become new realities.

I agree with Brockman that “Artificial intelligence is today’s story — the story behind all other stories. It is the Second Coming and the Apocalypse at the same time: good AI versus evil AI.” In his latest book, he has assembled contributions from 25 knowledge leaders, pioneer thinkers who share their thoughts as well as (yes) their feelings about the emergence of AI, for better or worse.

A polymath himself, he asked the essayists to consider Wallace Stevens’ Zen-like poem, “Thirteen Ways of Looking at a Blackbird,” and the parable of the blind men and an elephant. All of their essays include a “blackbird” in one form or another. Also, like the elephant, “AI is too big a topic for only one perspective, never mind the fact that no two people seem to see things the same way.”

There is a farmer’s market in or near the downtown area of most major cities, where a few merchants offer slices of fresh fruit as samples of their wares. In that same spirit, I now offer a selection of brief insights from eight contributors, the essence of each articulated by Brockman with precision and eloquence of the highest order:

o George Dyson: Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.

o Daniel C. Dennett: We don’t need artificial agents. We need intelligent tools.

o Frank Wilczek: The advantages of artificial over natural intelligence appear permanent, while the advantages of natural over artificial intelligence, though substantial at present, appear transient.

o Steven Pinker: There is no law of complex systems that says that intelligent agents must turn into ruthless megalomaniacs.

o Tom Griffiths: Automated intelligent systems that will make good inferences about what people want must have good generative models for human behavior.

o Chris Anderson: Just because AI systems sometimes end up in local minima, don’t conclude that this makes them any less like life. Humans — indeed, probably all life forms — are often stuck in local minima.

o Alison Gopnik: Looking at what children do may give programmers useful hints about directions for computer learning.

o Caroline A. Jones: The work of cybernetically inclined artists concerns the emergent behaviors of life that elude AI in its current condition.

I hasten to add that each of the other contributors, especially John Brockman himself, could also have been represented on this list of key insights. In fact, let’s have him share some final thoughts: “We used to think Earth was the center of the universe. Now we think we’re special because we have intelligence and nothing else does. I’m afraid the bad news is that that isn’t a distinction…Realizing that there isn’t a genuine distance between intelligence and mere computation leads you to imagine that future — the endpoint of our civilization as a box of a trillion souls, each of them essentially playing a video game, forever. What is the ‘purpose’ of that?”

What indeed….
