When Your Boss Wears Metal Pants

 

Here is an excerpt from an article written by Walter Frick for Harvard Business Review and the HBR Blog Network. To read the complete article, check out the wealth of free resources, obtain subscription information, and sign up for HBR email alerts, please click here.

Artwork Credit: Gordon Bennett, Tenna, 2006, wood, metal, Bakelite, glass, plastic, rubber, paint; Photography: Lucas Zarebinski

*      *      *

At a 2013 robotics conference the MIT researcher Kate Darling invited attendees to play with animatronic toy dinosaurs called Pleos, which are about the size of a Chihuahua. The participants were told to name their robots and interact with them. They quickly learned that their Pleos could communicate: The dinos made it clear through gestures and facial expressions that they liked to be petted and didn’t like to be picked up by the tail. After an hour, Darling gave the participants a break. When they returned, she handed out knives and hatchets and asked them to torture and dismember their Pleos.

Darling was ready for a bit of resistance, but she was surprised by the group’s uniform refusal to harm the robots. Some participants went as far as shielding the Pleos with their bodies so that no one could hurt them. “We respond to social cues from these lifelike machines,” she concluded in a 2013 lecture, “even if we know that they’re not real.”

This insight will shape the next wave of automation. As Erik Brynjolfsson and Andrew McAfee describe in their book The Second Machine Age, “thinking machines”—from autonomous robots that can quickly learn new tasks on the manufacturing floor to software that can evaluate job applicants or recommend a corporate strategy—are coming to the workplace and may create enormous value for businesses and society. (See the interview with Brynjolfsson and McAfee in this issue.) But although technological constraints are dissolving, social ones remain. How can you persuade your team to trust artificial intelligence? Or to accept a robot as a member—or even as a manager? If you replace that robot, will morale suffer?

Answering these questions requires an understanding of how humans will work with and relate to thinking machines. A growing body of research is expanding our knowledge, providing essential insights into how such collaborations can get work done. As these machines evolve from tools to teammates, one thing is clear: Accepting them will be more than a matter of simply adopting new technology.

When We Don’t Trust Algorithms—and When We Do

The first challenge in working with thinking machines is recognizing that they often know more than we do. Consider this 2014 finding: Researchers from Wharton ran a series of experiments in which participants were financially rewarded for good predictions and could either go with their own judgment or defer to an algorithm to make those predictions. For example, in one experiment they were shown admissions data for a group of past MBA students and asked to estimate how well each student had performed during the program. Most people preferred to go with their gut rather than defer to the algorithm’s estimates.

This phenomenon is called “algorithm aversion,” and it has been documented in many other studies. Whether they’re diagnosing patients or forecasting political outcomes, people consistently prefer human judgment—their own or someone else’s—to algorithms, and as a result they often make worse decisions. The message for managers is that helping humans to trust thinking machines will be essential.

Unfortunately, simply showing people how well an algorithm performs doesn’t make them trust it. When the Wharton researchers let participants see their guesses, the algorithm’s, and the correct answers, the participants recognized that the algorithm usually performed better. But seeing the results also meant seeing the algorithm’s errors, which undermined their trust in it. “People lose confidence in algorithms after they’ve seen them err,” says Berkeley Dietvorst, one of the researchers. Even though the humans were wrong more often than the algorithm was, he says, “people don’t lose confidence in themselves.” In other words, we seem to hold mistakes against an algorithm more than we would against a human being. According to Dietvorst, that’s because we believe that human judgment can improve, but we think (falsely) that an algorithm can’t.

Algorithm aversion may be even more pronounced for work that we perceive as more sophisticated or instinctive than number crunching. Researchers from Northwestern’s Kellogg School and Harvard Business School asked workers on the crowdsourcing site Mechanical Turk to complete a variety of tasks; some were told that the tasks required “cognition” and “analytical reasoning,” while others were told that they required “feeling” and “emotion processing.” Then the participants were asked whether they would be comfortable if this sort of work were outsourced to machines. Those who had been told that the work was emotional were far more disturbed by the suggestion than those who had been told it was analytical. “Thinking is almost like doing math,” concludes Michael Norton, of HBS, one of the study’s authors. “And it’s OK for robots to do math. But it’s not OK for robots to feel things, because then they’re too close to being human.”

*      *      *

Here is a direct link to the complete article.

Walter Frick is a senior editor at Harvard Business Review.
