* * *
Four new books explore how AI can help—and hinder—our productivity: Co-Intelligence, by Ethan Mollick; The Singularity Is Nearer, by Ray Kurzweil; The Mind’s Mirror, by Daniela Rus and Gregory Mone; and Slow Productivity, by Cal Newport.
“Three sleepless nights”—that’s what Ethan Mollick, a Wharton associate professor, says a newcomer needs to spend experimenting with tools such as ChatGPT and Midjourney to get up to speed on the current state of generative artificial intelligence. But what’s next for this new technology? How will it evolve over time? And how will it change the way we work? Mollick’s book, Co-Intelligence: Living and Working with AI, and a plethora of other new AI-focused publications (more than 40 will be released in 2024, according to Amazon) seek to answer these questions.
Given the recency of this breakthrough, however, accurate predictions are challenging. As Mollick writes, “We have created [an AI] that has blown through both the Turing Test (Can a computer fool a human into thinking it is human?) and the Lovelace Test (Can a computer fool a human on creative tasks?) within a month of its invention….[Yet] it is not entirely clear why the AI can do all these things, even though we built the system….No one really knows where this is all heading, including me.”
Nonetheless, his book delivers on its promise to be a useful guide to the new, mysterious technology, perhaps most importantly by explaining how it has already become an indispensable tool for many professionals. Mollick cites a study in which Boston Consulting Group split equally talented consultants into two groups: one had access to a popular gen-AI tool and received basic training in its use; the other did not. Both groups were tasked with 18 challenges designed to mimic typical assignments. “The AI-powered consultants were faster, and their work was considered more creative, better written, and more analytical,” he writes.
In the face of such evidence, Mollick proposes four principles for knowledge workers: First, “Always invite AI to the table”—a call to experiment with it in all projects. Second, “Be the human in the loop,” maintaining oversight of AI outputs to catch “hallucinations.” Third, “Treat AI like a person”—or, more precisely, like an intelligent yet inexperienced intern requiring instruction. And finally, “Assume this is the worst AI you will use.”
That last principle, the expectation that AI will only grow more capable, also underpins apocalyptic visions of the technology's future. Mollick acknowledges concerns about scenarios in which AI makes all human workers obsolete, or even becomes so powerful that it eliminates humanity (to prevent us from switching it off). He explores calls for stringent regulation, including an open letter from dozens of AI experts urging an immediate moratorium on AI development.
* * *
Eben Harrell is a senior editor at Harvard Business Review.