Here is an excerpt from an interview of Cass Sunstein and Richard Thaler by Roberta Fusaro and Julia Sperling-Magro for the McKinsey Quarterly, published by McKinsey & Company.
* * *
Since Harvard professor Cass Sunstein and University of Chicago professor Richard Thaler introduced the concept of nudging to the world in 2008, about 400 “nudge units”—or behavioral-insights teams—have been established in public- and private-sector organizations around the world. Nudges are interventions, big and small, aimed at getting people to act in their own best interest. Health organizations, for example, have used nudges to educate citizens about COVID-19 testing and vaccination. Consumer-goods companies have used them to steer customers toward climate-friendly products and services. Indeed, nudging has become so widespread that Sunstein and Thaler decided to update their thinking and capture it in the newly released Nudge: The Final Edition (Penguin Books, August 2021). In a recent conversation with McKinsey’s Julia Sperling-Magro and Roberta Fusaro, the authors reminded us what nudging and choice architecture are. They also considered how technology and other changes in business and society have altered the practice of nudging and the amount of “sludge” in decision making. An edited version of the conversation appears here.
McKinsey: For the uninitiated, what is nudging?
Cass Sunstein: A nudge is an intervention that maintains freedom of choice but steers people in a particular direction. A tax isn’t a nudge. A subsidy isn’t a nudge. A mandate isn’t a nudge. And a ban isn’t a nudge. A warning is a nudge: “If you swim at this beach, the current is strong, and it might be dangerous.” You’re being nudged not to swim, but you can. When you’re given information about the number of fat calories in a cheeseburger, that is a nudge. If a utility company sends a notice two days before a bill is due, saying, “Pay now, or you will incur a late fee,” that is a nudge. You can say no, but it’s probably not in your best interest to do so. Nudges help people deal with a fact about the human brain—which is that we have limited attention. The number of things we can devote attention to in a day, an hour, or a year is lower than the number of things we should devote attention to. A nudge can get us to pay attention.
McKinsey: How is nudging different now than it was, say, 13 years ago, when your book was originally published? What makes for a good nudge in 2021?
Cass Sunstein: The basic theory is similar, though I think we understand it better now than we did then—and I think we’ll understand it better in ten years than we do now. We know that good nudges still make the chooser’s life better, and bad nudges don’t. What we’re seeing more of now, however, is nudging to protect third parties. You might have a climate-change nudge where the basic goal isn’t to protect the chooser; it’s to reduce greenhouse-gas emissions. In Switzerland, for instance, people have been automatically enrolled in clean-energy programs. If they don’t want to participate, they can opt out, although the “dirtier” program may be more expensive. That nudge is designed to protect people from climate change generally, not necessarily to benefit the individual chooser.
McKinsey: Nudging is tied very closely to the concept of “choice architecture.” What is that? Can you remind us?
Cass Sunstein: Really, any situation where you’re making a choice has an architecture to it. The owner of a website may put certain things in a very large font—the things that the private or public institution really wants you to attend to and maybe choose—and keep certain things hidden in small print at the bottom. And it turns out that small differences in this kind of architecture can lead to large differences in social outcomes. If you have a choice architecture where people must opt in, for instance, the participation rate is a lot lower than if participation is the default and people must opt out.
One example of that is a US program designed to help children get access to school meals. The kids are legally entitled to these meals if they’re poor. But a lot of their parents don’t sign them up, probably because the process is confusing or intimidating, or simply because it takes time that the parents don’t have. The government switched from an opt-in design to an opt-out design—if the school or the locality knows that you’re poor and you’re a child, you automatically get the meal. The idea was that this wouldn’t require a big advertising campaign. It would be very simple. And at last count, 15 million children in the US are enjoying nutritious and tasty meals in school.
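To see how much a default can matter, here is a minimal simulation sketch in Python. The population size and override rate are hypothetical illustrations, not figures from the school-meals program; the point is only that the same people, with the same preferences, participate at very different rates under the two designs.

```python
# Minimal sketch of why defaults matter in choice architecture.
# All numbers are hypothetical illustrations, not program data.
import random

random.seed(0)

N = 10_000          # hypothetical population size
P_OVERRIDE = 0.15   # hypothetical chance a person acts against the default

def participation_rate(default_enrolled: bool) -> float:
    """Share enrolled when most people simply keep the default.

    A person ends up enrolled if the default enrolls them and they
    don't opt out, or if the default excludes them and they opt in.
    """
    enrolled = sum(
        (random.random() < P_OVERRIDE) != default_enrolled
        for _ in range(N)
    )
    return enrolled / N

print(f"Opt-in  design: {participation_rate(default_enrolled=False):.0%} participate")
print(f"Opt-out design: {participation_rate(default_enrolled=True):.0%} participate")
# Same population, same preferences; only the default differs (~15% vs. ~85%).
```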
McKinsey: For all the good that nudges can do, there are also ethical concerns. How can you be sure people are using nudges in the right way?
Richard Thaler: I get this question all the time. Do we worry about how people are thinking about this concept? It’s been a concern, sure. For the past 13 years, I’ve been signing copies of the book with the note, “Nudge for good,” which was meant as a plea. But I don’t think bad people need our book to do bad things.
Cass Sunstein: In the book, we refer to a bill of rights for nudging. Nudges should satisfy certain constraints—that is, they should be transparent, not covert or hidden. They should be in the interests of the people who are being nudged and consistent with their values. They should be subject to political safeguards, in the sense that if the people don’t like them, they should be able to say, “We don’t want that one.” And they should be consistent with constitutional understandings in the relevant nation. We’re very focused on ensuring that nudges are compatible with human dignity. If you’re nudged and you think, “That was awful. Why did that happen? I’m sadder and poorer,” that’s an unethical nudge.
McKinsey: How have advances in technology changed the practice of nudging?
Cass Sunstein: Technology enables something we call smart disclosure. If you have a cell phone—most people do—or a credit card—most people do—you get information somewhere, somehow, about your usage. Under the rules of smart disclosure, there would be simple, easily accessible, machine-readable access to your own data. You could compare your current cell-phone usage with the usage in a previous period and, possibly, with other people’s usage, so long as everyone’s privacy is respected. With more information about your credit-card usage, you could see that something important is getting underfunded relative to other things and make better choices.
Richard Thaler: Or suppose your kid is allergic to peanuts. You’d like to buy things that don’t have peanuts. You could start picking up every package and scanning all the ingredients—hopefully, you have good eyesight or glasses. That’s a nuisance. But if you’re a member of a shoppers’ club at a supermarket, the store knows everything you’ve bought, right? If they could make the technology work right, you could go to the store’s website and download, with one click, a file that lists everything you’ve bought in the past six months. With one more click, you could send that file to another website, NoPeanuts.com, and it could filter the list: “Don’t buy those 20 things; here are some suggested substitutes.” That’s smart disclosure. We should be able to do this for everything, for all our own data.
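Thaler’s NoPeanuts.com is a hypothetical, but the filtering step he describes is easy to picture in code. Here is a minimal sketch, assuming the store exports a machine-readable purchase list and some service maintains ingredient and substitute data; all product names, ingredients, and substitutes below are invented for illustration.

```python
# Minimal sketch of the smart-disclosure filtering step described above.
# Everything here (products, ingredients, substitutes) is hypothetical.

# Hypothetical one-click download: six months of purchases.
purchases = ["GranolaBars-12ct", "PeanutButter-16oz", "TrailMix-8oz", "Oatmeal-18oz"]

# Hypothetical ingredient lookup a NoPeanuts.com-style service might keep.
ingredients = {
    "GranolaBars-12ct": {"oats", "honey", "peanuts"},
    "PeanutButter-16oz": {"peanuts", "salt"},
    "TrailMix-8oz": {"raisins", "almonds", "cashews"},
    "Oatmeal-18oz": {"oats"},
}

# Hypothetical peanut-free substitutes for flagged items.
substitutes = {
    "GranolaBars-12ct": "OatBars-12ct",
    "PeanutButter-16oz": "SunflowerButter-16oz",
}

# Flag every purchase containing the allergen and suggest a replacement.
for item in purchases:
    if "peanuts" in ingredients.get(item, set()):
        swap = substitutes.get(item, "a peanut-free alternative")
        print(f"Don't buy {item}; try {swap}")
```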
Also, there are connected devices, like fitness bands and smartwatches, that allow people to nudge more efficiently and effectively. I was on a video call with a few academics and a company that is trying to help people deal with diabetes. We were discussing the use of glucose monitors that would be somewhere on your body, and maybe your phone starts beeping after the first bite of that ice-cream sundae. There are lots of ways an unobtrusive thing on your wrist can help you make better choices—even if it’s not always perfect.
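Here is a minimal sketch of that kind of wearable nudge, assuming a monitor that reports readings in mg/dL; the alert threshold and the sample readings are invented for illustration, not medical guidance.

```python
# Minimal sketch of a threshold-based wearable nudge, as in the
# glucose-monitor example above. The 180 mg/dL threshold and the
# readings are illustrative assumptions, not medical advice.

GLUCOSE_ALERT_MG_DL = 180  # hypothetical alert threshold

def check_reading(mg_dl: float) -> str | None:
    """Return a nudge message if the reading crosses the threshold."""
    if mg_dl >= GLUCOSE_ALERT_MG_DL:
        return f"Glucose at {mg_dl} mg/dL -- consider skipping the next bite."
    return None

# Simulated stream of monitor readings (illustrative values).
for reading in [110, 145, 182, 201]:
    message = check_reading(reading)
    if message:
        print(message)  # in a real device, this would beep or buzz instead
```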
* * *