If the past half-century of American political history has taught us anything, it’s that we can’t possibly know the consequences of bombing — or not bombing — Iran.
BY FRANCIS J. GAVIN AND JAMES B. STEINBERG | FEBRUARY 14, 2012
In his State of the Union address, President Barack Obama told the world, “America is determined to prevent Iran from getting a nuclear weapon, and I will take no options off the table to achieve that goal.” The decisions the U.S. president must make to attain this end are extraordinarily difficult, and whatever policy he chooses will have a profound and lasting effect on global politics and U.S. foreign policy. For some, the answers lie in history. Yet the best response to the Iranian threat may emerge not by looking to the past but by transforming the way experts and policymakers interact.
The decision on the table is remarkably complex: Should the United States launch a preventive strike against Iran’s nuclear facilities or encourage its Israeli allies to do so? To answer this question, one would need to, at a minimum, imagine and make judgments on plausible scenarios that could emerge from each choice. If the United States chose not to bomb Iran, would countries in the region eschew their own nuclear weapons and work with the United States to balance against and contain a nuclear Iran? Or would Iran’s nuclear capability drive neighboring states to “bandwagon” and ally with Iran or even seek their own nuclear weapons, undermining U.S. influence while destabilizing the region? And if the United States did successfully strike, what would be the chances that such military action would lead to an overthrow of the regime and its replacement with a government both friendly to the West and willing to forgo nuclear weapons? Or could a military strike provide a lifeline to an unpopular regime, inflame anti-American sentiment throughout the region, and unleash a wider military conflagration? And how would other global powers, such as China and Russia, react to these scenarios?
Based on our experiences — one of us a former senior policymaker, the other a historian of U.S. foreign policy — we are convinced that the “right” answer, but the one you will never read on blogs or hear on any cable news network, is that we simply cannot know ahead of time, with any degree of certainty, what the optimal policy will turn out to be. Why? Even if forecasters could provide probabilities about the likelihood of a narrow, specific event, it is simply beyond the capacity of human foresight to make confident predictions about the short- and long-term global consequences of a military strike against Iran.
In fact, as Philip Tetlock demonstrated in Expert Political Judgment, a 20-year study that looked at over 80,000 forecasts about world affairs, self-proclaimed authorities are no better at making accurate predictions than monkeys throwing darts at a dart board, and they are rarely held accountable for their errors. (According to Tetlock’s research, knowing a lot about an issue can actually make you a worse political forecaster than knowing very little.) Policymakers and elected officials, on the other hand, not only face public condemnation and the potential loss of their jobs if a decision turns out poorly, but they also carry the often heavy personal burden of responsibility for a failed policy. Understanding the different environments in which the expert and decision-maker operate is critical to understanding why expert ideas have less influence on policymaking than might be ideal.
This gulf is tragic, as there is much each world could learn from the other. We believe that if different types of experts — the best strategists and historians, for example — were brought together with statesmen in an environment that encouraged honest debate and collaboration rather than point-scoring, and where participants acknowledged how little anyone can actually know about the future effects of U.S. actions, the U.S. foreign-policymaking process could achieve both greater coherence and greater humility.
In such an environment, both camps might be tempted to explore the past to find examples of policies that can guide their decision-making. Although at first blush this seems wise, it is not fail-safe. And the deliberations over Iran provide a case in point.
Four decades ago, historian Ernest May warned against the tendency of policymakers and analysts to employ simple but misleading historical analogies in justifying difficult policies. Would allowing the aggressive, dangerous regime in Iran to acquire nuclear weapons be akin to another Munich — the 1938 conference at which British Prime Minister Neville Chamberlain infamously capitulated to Nazi leader Adolf Hitler’s outrageous demands? Or would a dangerous military action halfway across the world bog America down in another Vietnam — a quagmire of a war that saps American blood and treasure and is not justified by national interest? In both cases, the simplistic use of lessons from the past distorts more than it reveals. There is no guarantee that using a more recent historical incident — for example, the erroneous intelligence about weapons of mass destruction in Iraq that led to an eight-year, trillion-dollar U.S. military intervention — would be any more helpful in making policy toward Iran.
Even more sophisticated and nuanced uses of history are not without their difficulties. When thinking about the consequences of a nuclear-armed Iran, some historians have pointed to how Lyndon B. Johnson’s administration responded to China’s nuclearization in October 1964. After weighing the potential benefits and costs of a preventive strike, the United States accepted and actually downplayed the significance of China’s nuclear capability. Mao’s China — which had been reckless abroad and ruthless at home — did not become more dangerous as an atomic power. In fact, in less than a decade after its nuclear test, China had become a de facto ally of the United States and a crucial partner in the Cold War rivalry with the Soviet Union. It is hard to imagine such an alliance if the United States had decided to strike in 1964.
Does this argue against striking Iran? Not necessarily. The Johnson administration’s decision not to strike China can only be understood in a larger and long-since forgotten context: an important shift in U.S. strategy aimed at managing the complex, interconnected issues of global nuclear proliferation, relations with the Soviet Union, the war in Southeast Asia, and the political and military status of Germany.
What is often forgotten in the story is that the same policymakers who eschewed preventive strikes against China in the fall of 1964 made several other related decisions they considered even more momentous. First, they made a bold decision to work with their Cold War adversary, the Soviet Union, to aggressively pursue a global nuclear nonproliferation regime. Most controversially, this policy shift included prohibiting some of the United States’ closest allies from acquiring atomic weapons. Many experts both within and outside the U.S. government worried this policy shift could be a potentially catastrophic mistake. It was foolish, many argued, to think cooperation with the Soviets was possible, and imprudent to try to prevent sovereign states, particularly friends of the United States, from possessing their own deterrent. Denying modern weapons to West Germany, some experts predicted, could lead to a resurgence of nationalism and even militarism, as it had during the interwar period. In the end, U.S. policies to slow the spread of nuclear weapons were quite effective, as there are far fewer nuclear states in the world today than anyone in 1964 would have predicted. Furthermore, the most alarming forecasts about how countries like West Germany and Japan would react to their non-nuclear status were, fortunately, wildly off the mark.
The fall of 1964 also saw these same policymakers decide to escalate U.S. military efforts in Vietnam, in part to demonstrate to non-nuclear countries — Australia, India, Indonesia, Japan, South Korea, Taiwan, and, yes, West Germany — that the United States would defend vulnerable countries, even if they were threatened by a nuclear-armed state or its proxy, in this case China and North Vietnam. As Henry Rowen, assistant defense secretary for international security affairs, wrote at the time, “A U.S. defeat in Southeast Asia may come to be attributed in part to the unwillingness of the U.S. to take on North Vietnam supported by a China that now has the bomb.” Walt Rostow, the U.S. State Department’s policy planning director, argued that the Johnson administration could make “U.S. military power sufficiently relevant to the situation in South-east Asia” to eliminate the impulse of states in the region to acquire their own atomic weapons. If the United States abandoned South Vietnam, it was feared, America’s allies might lose faith in the country’s promises to protect them and respond by seeking their own nuclear weapons. A nuclear tipping point that might start with Japan could spread throughout East Asia to include Australia, Indonesia, and South Korea. Unchecked, proliferation pressures could move to other regions of the world, even to West Germany’s nuclearization, threatening the stability of Central Europe.
Examined on their own merits, two of the policies — the decision not to launch a preventive strike against China and the decision to cooperate with the Soviet Union on limiting the spread of nuclear weapons — might be judged great successes, while the third — the U.S. military escalation in Vietnam — is seen as a disaster. But can they really be examined apart from one another? Decision-makers worried that if China were struck, cooperation with the Soviet Union on creating a robust nuclear nonproliferation regime could have been foreclosed. But if the United States stood back while the South Vietnamese government collapsed, might states in the region (and the wider world) have interpreted U.S. policy as a retreat caused by China’s atomic detonation? Wouldn’t those under the protection of the United States — including West Germany — worry that in light of the circumstances they would need to guarantee their own security, even if it meant acquiring their own nuclear weapons?
If Vietnam is understood at least in part as a function of the Johnson administration’s successful efforts to encourage nuclear nonproliferation, seek détente and cooperation with the Soviets, and manage the German question, the policy — if still disastrous in its consequences — makes more sense. The difficulty inherent in assessing U.S. foreign policy is made clear by the fact that all three policies were crafted by the same policymakers in the same administration at the same time. The point here is not to judge any of these decisions, justify the war in Vietnam (quite the contrary), or even to accept this historical interpretation of the events of late 1964, but only to highlight how misleading it can be to cherry-pick particular policies without a greater understanding of the complex, horizontal connections between seemingly unrelated issues — linkages that are rarely recognized by those outside the world of the top decision-makers.
Circling back to U.S. deliberations over a nuclear Iran, there are other, interrelated policies, both in the Middle East and worldwide, that would be enormously influenced by a U.S. decision to strike or not strike. While pundits can examine the issue in isolation, policymakers have to think about how their decisions will reverberate over time and on issues seemingly unrelated to the theocracy in Tehran, such as global energy prices, the war in Afghanistan, the Israeli-Palestinian peace process, North Korea’s nuclear capacity, the strength of the global nuclear nonproliferation regime, the credibility of America’s promise to protect its allies with its nuclear weapons, relations with China and Russia, and the trajectory of the Arab Spring, to name a few.
Reading most experts, one might think that effective foreign policy consists of choosing between a series of discrete, binary choices and assigning probabilities based on either clear and parsimonious (if disputed) laws of international politics or facile analogies with past circumstances. The truth, as every experienced policymaker knows, is that there are rarely simple solutions when facing radical uncertainty in a complex international environment. This explains why policymakers often prefer to “muddle through,” buy time, or seek a compromise between extreme policy options, if only to decrease the downside risk of any decision. While these “second best” policies are the very positions most likely to draw fire from pundits, they are often less likely to lead to disaster than the bold but untested recommendations of prominent experts.
Is there a way that experts could contribute more constructively to policymaking? During a recent workshop hosted by the University of Texas, historians, strategists, and current and former statesmen gathered to find answers. One big idea emerged: singular theories, models, and historical analogies, in isolation and unchallenged, are of little value to policymakers. But various theories, models, and histories taken together and tailored to the realities faced by policymakers could potentially provide quite a bit of insight.
How? Imagine a group of experts and statesmen meeting off the record, temporarily suspending their desire to predict, blog, or be on television, and spending a day or two intensely debating alternative scenarios that might emerge from a U.S. decision to bomb or not bomb Iran. We are talking about something more than the “war-gaming” that occasionally takes place; this would be a deeper, broader endeavor that looked beyond the immediate consequences of a policy choice in order to reflect upon and wrestle with the longer-term, unknown futures that U.S. actions might bring. A somewhat similar effort was tried before: President Dwight Eisenhower’s well-known and successful “Solarium” exercise. Imagine a comparable effort, including both outside experts and government decision-makers, incorporating many of the innovations that have emerged since 1953, such as game theory, scenario planning, and detailed historical case studies. Not only might novel policy ideas emerge, but a rigorous vetting of contrasting futures could act as de facto contingency planning should a particular policy choice turn out to be wrong. Such an exercise could also sensitize outside experts to the inherent difficulties, tradeoffs, and unintended consequences of making U.S. foreign policy, which might reduce the shrillness and polarization that often characterize policy debates and make expert knowledge more useful and accessible.
The benefits of exercises in which pundits and policymakers acknowledge that perfect intelligence is unattainable, and in which honest mistakes about an unknowable, uncertain future can be admitted and forgiven, would be enormous. If nothing else, the humility and flexibility that ensued could lead to more-effective long-range policies. Although such a process may not tell us whether bombing Iran or not is “right,” it will better prepare us for the unexpected, unintended, and challenging consequences that will surely result, regardless of which policy is chosen. Given the enormous long-term stakes of the choices before the U.S. president, it is the least that policymakers and experts can do.
Francis J. Gavin is director of the Robert S. Strauss Center for International Security and Law at the University of Texas and the Tom Slick professor of international affairs at the LBJ School of Public Affairs.
James B. Steinberg is dean of Syracuse University’s Maxwell School and university professor of social science, international affairs, and law. He served as deputy secretary of state to Secretary Hillary Clinton from 2009 to 2011 and as deputy national security advisor to President Bill Clinton from 1996 to 2000.