The Power of Precise Predictions

Here is a brief excerpt from a recent “Gray Matter” column by Philip E. Tetlock and J. Peter Scoblic for The New York Times. To read the complete article, check out other columns, and obtain subscription information, please click here.

Photo Credit: Gérard DuBois

* * *

Is there a solution to this country’s polarized politics?

Consider the debate over the nuclear deal with Iran, which was one of the nastiest foreign policy fights in recent memory. There was apocalyptic rhetoric, multimillion-dollar lobbying on both sides and a near-party-line Senate vote. But in another respect, the dispute was hardly unique: Like all policy debates, it was, at its core, a contest between competing predictions.

Opponents of the deal predicted that the agreement would not prevent Iran from getting the bomb, would put Israel at greater risk and would further destabilize the region. The deal’s supporters forecast that it would stop (or at least delay) Iran from fielding a nuclear weapon, would increase security for the United States and Israel and would underscore American leadership.

The problem with such predictions is that it is difficult to square them with objective reality. Why? Because few of them are specific enough to be testable. Key terms are left vague and undefined. (What exactly does “underscore leadership” mean?) Hedge words like “might” or “could” are deployed freely. And forecasts frequently fail to include precise dates or time frames. Even the most emphatic declarations — like former Vice President Dick Cheney’s prediction that the deal “will lead to a nuclear-armed Iran” — can be too open-ended to disconfirm.

There is a familiar psychological mechanism at work here. One of us, Professor Tetlock, has been running lab studies since the early 1980s that show that if people expect that others will evaluate the accuracy of their judgments — that is, if people feel they will be held accountable for their views — then they tend to avoid cognitive pitfalls such as overconfidence and the failure to update beliefs in response to new evidence. Professor Tetlock and the psychologist Jennifer Lerner have demonstrated that accountability has this effect because it encourages people to pre-emptively think of ways in which they might be wrong — before others do it for them.

But when people make non-falsifiable predictions, they feel less accountable. After all, if a prediction can never be disproved, then it poses no reputational risk. That lack of accountability, in turn, encourages overconfidence and even more extreme predictions.

Non-falsifiable predictions thus undermine the quality of our discourse. They also impede our ability to improve policy, for if we can never judge whether a prediction is good or bad, we can never discern which ways of thinking about a problem are best.

The solution is straightforward: Replace vague forecasts with testable predictions.

* * *
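What does a “testable” prediction look like in practice? In Tetlock’s forecasting tournaments, a forecast is a probability attached to a question with a clear resolution date, and accuracy is judged with the Brier score. Here is a minimal sketch in Python; the scoring rule is the standard one, but the example questions and numbers are hypothetical.

```python
# A minimal sketch of what "testable" buys you: once a forecast is a
# probability attached to a yes/no question with a deadline, its accuracy
# can be scored after the fact. The Brier score below is the standard
# measure in Tetlock's forecasting tournaments; the resolved predictions
# are invented for illustration.

def brier_score(forecasts):
    """Mean squared error between predicted probabilities and outcomes.

    Each item is (probability assigned to the event, 1 if it happened,
    else 0). Lower is better: 0.0 is perfect, 0.25 matches always
    guessing 50%, and confident misses approach 1.0.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical resolved predictions, e.g. "There is a 70% chance Iran's
# enriched-uranium stockpile stays under the agreed cap through 2016."
resolved = [
    (0.70, 1),  # forecaster said 70%; the event happened
    (0.90, 1),  # said 90%; happened
    (0.60, 0),  # said 60%; did not happen
]

print(f"Brier score: {brier_score(resolved):.3f}")  # prints 0.153
```

The point is the mechanism, not the arithmetic: once predictions are phrased this way, “who was right” becomes a matter of scoring rather than rhetoric.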

Here is a direct link to the complete article.

Philip E. Tetlock, a professor at the University of Pennsylvania, is a co-author (with Dan Gardner) of Superforecasting: The Art and Science of Prediction. J. Peter Scoblic is a fellow with the International Security Program at New America and a doctoral student at Harvard Business School.
