
Posts from the ‘History’ Category

Review: The Warthog and the Close Air Support Debate

By Taylor Marvin

The A-10 Thunderbolt II is one of the world’s most fascinating combat aircraft. Dubbed the “Warthog”, both lovingly and disparagingly, for its unique appearance, the A-10 was designed as a purpose-built aircraft uncompromisingly dedicated to Close Air Support (CAS), or supporting ground troops in direct contact with enemy forces. CAS has a controversial history within the US military because the mission can arguably be best performed by either the Air Force or the Army; while the Air Force is traditionally tasked with land-based fixed-wing aviation, effective close air support requires close coordination with the Army’s ground troops. The Air Force has traditionally been accused of neglecting CAS in favor of the more glamorous air superiority and strategic bombing missions, and the A-10 grew out of a complicated and protracted late-1960s bureaucratic struggle over the future of CAS, one pitting the Air Force against the Army’s claim that advanced helicopter gunships could fill the hole left by the Air Force’s — in the Army’s eyes — obvious neglect of the mission. This interservice rivalry and the increasingly dangerous projected Cold War battlefield resulted in the A-10: a slow, heavily armed and armored aircraft built around a massive, devastating gun.

Douglas N. Campbell’s 2003 book The Warthog and the Close Air Support Debate (which I read after Robert Farley noted it) is an excellent history of the A-10 and, more broadly, of the postwar American debate over the best means of providing CAS and which service should fill the role. Campbell takes time to lay out the history of American CAS, beginning with the enormously successful WWII-era P-47, designed as an air-to-air fighter, which convinced the soon-to-be US Air Force that multirole aircraft were the best answer to the CAS mission. This perception was only strengthened by the Eisenhower-era “New Look” defense outlook, which stressed nuclear deterrence and the high-tech, high-flying strategic bombers and air-superiority fighters that the Air Force brass favored. During the Vietnam War relations between the Army and Air Force became more and more strained, as the high speed, poor maneuverability, and high fuel consumption of the Air Force’s favored fast jets made them unsuitable for the CAS mission. As helicopter gunships came into their own, the Army — prohibited from operating most fixed-wing aircraft — came to believe that its advanced AH-56 Cheyenne helicopter concept could provide the answer to the CAS question.

The A-10 was the Air Force’s answer to criticisms that it was unprepared to fulfill the need for CAS. Recalling some aspects of contemporary procurement, the A-X program, the forerunner of the A-10, began as a Vietnam-influenced concept primarily dedicated to counterinsurgency, but as the war in Southeast Asia wound down and the US military refocused on the European theater the A-X’s mission shifted to killing Soviet tanks. Unlike previous efforts to fill the CAS role with multirole aircraft also capable of air-to-air combat or bombing missions, the A-10 was entirely dedicated to CAS. Its straight wings and engines made it slow, but also gave it superb low-speed maneuverability and the ability to loiter above battlefields for extended periods, abilities appreciated by ground forces that fast jets could not match. Heavily armored and designed to be as survivable as possible, the A-10 could take hits that would kill other aircraft.

But in the 1980s the A-10’s role was once again called into question. The Army, freshly armed with the AH-64 Apache attack helicopter that replaced the cancelled Cheyenne, now felt that the A-10 was less necessary, and the Air Force had decided that the multirole F-16 could be a more versatile — and, importantly in some eyes, more glamorous — replacement for the Warthog. While most modern observers dismiss proposals that the fast, multirole F-16 could have replaced the specialized A-10, it is important to remember that the Air Force had real concerns over the slow A-10’s ability to survive in the face of increasingly capable air defense systems, and folding the CAS mission into the F-16 fleet would have simplified the service’s maintenance, logistics, and training. As Greg Goebel notes in his excellent history of the A-10:

“While the military has its fair share of dumb SOBs, it also has its fair share of sensible and competent people, and the CAS issue was one in which good people could differ: What you see depends on where you stand.”

The Air Force has a long history of favoring multirole aircraft that ultimately proved unsuited to the CAS mission. But the argument that the A-10 would not survive the European war Air Force officers of the late 1980s were preparing for is not in and of itself unreasonable, and importantly it’s a question we’ll never know the answer to. In any case, the Air Force’s “A-16” proposal never progressed, and the A-10 famously served through the Gulf War and into the 21st century.

Image via Wikimedia.


The Warthog and the Close Air Support Debate focuses on the aircraft’s procurement, rather than combat, history, and contains relatively little description of the aircraft itself. But Campbell’s book is a fascinating look at the politics of military procurement and interservice rivalries, as well as how individual aircraft influence institutional behavior. Campbell’s most important insight is that the A-10’s dedicated single-role mission, rather than the aircraft itself, is its most important feature. Even if the Air Force could somehow adopt a multirole fighter as perfectly suited to the CAS mission as the A-10, pilots would inevitably spend less time training for close air support as other missions competed for their time and attention, an argument with particular relevance to discussions over the ability of the multirole F-35 to replace the A-10 in the CAS role.

Campbell includes many amusing anecdotes as well, including a McNamara-era Office of the Secretary of Defense staffer (Pierre Sprey, who bizarrely seems to have recorded the chorus sampled in the Kanye West song Jesus Walks) who left a short stint at Grumman Aircraft because “it would be twenty years before they let me design an aileron” and then played a pivotal role in the early A-X program: in Campbell’s words, “as a brilliant and energetic participant who helped ensure that the plane’s design remained practical, he influenced more than an airplane aileron’s construction.” Referencing the Army’s perception that the A-10 existed only to kill their beloved Cheyenne attack helicopter concept, Campbell relays a 1968 Armed Forces Journal cartoon showing

“a winged tank sitting behind a ‘Tactical Air Command’ sign. An Air Force general glares at the craft, while a subordinate says to him, ‘No sir General it won’t fly, but it will sure scare the hell out of the Army!’”

The A-10’s unconventional appearance and slow speed also inspired its share of jokes: “What’s the speed indicator on an A-10? A calendar.”

Also mentioned is fascinating obscure trivia from the A-X program. Early in the program designers studied mounting a recoilless rifle, which if adopted would have produced a far different aircraft; the A-X program was one of the first since the 1950s to be decided in a competitive flyoff; and the Army consistently referred to helicopter CAS as “direct fire support” to keep its options open, preserving the rationale for the Cheyenne while also acknowledging that improved USAF CAS capability would be nice.

The book’s main shortcoming is its brevity. Campbell covers the flyoff between the Northrop A-9 and the winning Fairchild Republic A-10 in only a few pages, and in particular devotes little time to the engineering decisions that led to each prototype’s differing design scheme. While Campbell briefly discusses foreign CAS, notably the IDF’s experience, more information would be valuable in contextualizing the American CAS debate. Additionally, the book’s scope is limited by its 2003 publication date: Campbell covers the post-Gulf War period only in the book’s conclusion. Today the debate over the future of CAS is dominated by questions over drones, the ability of advanced precision-guided munitions to allow non-traditional aircraft to fly CAS, and the real-world capabilities of the F-35, which is intended to fill the A-10’s CAS shoes. Given The Warthog’s publication date, Campbell is unable to discuss these questions. Despite this, The Warthog and the Close Air Support Debate is a fascinating book, and is recommended for anyone interested in the A-10, military procurement, and interservice politics.

Histomaps and Euro-Centric Histories

By Taylor Marvin

At Slate, Rebecca Onion highlights a gorgeous vintage poster that claims to illustrate “4,000 years of world history.” This “Histomap” — created by John B. Sparks in 1931 — attempts to show the waxing and waning power of rival civilizations graphically and, in Onion’s words, “emphasizes domination, using color to show how the power of various ‘peoples’ (a quasi-racial understanding of the nature of human groups, quite popular at the time) evolved throughout history.” Click through to Slate for an expanded view.

I’ve previously encountered the Histomap, though I wouldn’t have remembered it without Onion’s post — if I recall correctly, a teacher showed a copy to my seventh-grade class during a world history lesson. But looking at the chart today, what’s most apparent is just how dated the Histomap’s view of history is; specifically, Sparks presents an enormously Western Europe-centric view of world history. This perspective stems from the chart’s vague definitions, which allow its estimations of various peoples’ “relative power” to fit the biases of Sparks and the 1931 Western culture he represented. If relative power derives from the size of empires, why do the 15th century Incas appear so minuscule? Similarly, in the first century AD the Roman Empire and the Han Dynasty each controlled roughly a fifth to a quarter of humanity. By what possible criteria could Rome hold two thirds of “world power” and China almost none? Why is 16th century Spain, which controlled one of the largest empires in history, ranked as significantly less powerful than England?

Ultimately the Histomap reflects, of course, its author’s contemporary biases rather than any real historical reality (not that this reality would be at all possible to convey in such a simplistic format). Western civilization defines itself as the heir to the ancient Greek and then Roman civilizations through early-modern Western (importantly, not Mediterranean or Catholic) European intermediaries, a self-appointed narrative much stronger in Sparks’ era than today. It’s unsurprising, but deeply illuminating, that the Histomap highlights these cultural traditions at the expense of others.

Note: Underscoring the Histomap’s uncomfortable racial connotations, Sparks’ “Histomap of Evolution” charts the history of human civilizations alongside those of “mollusks” and “protozoa”. The implication, that human ethnic groups are as biologically separate as zoological taxa, makes this view of history one of the clearest examples of Social Darwinist philosophy imaginable.

Vanished Territories, Borders, and Names

By Taylor Marvin

Today I stumbled across a friend’s copy of a 1970 edition of the National Geographic Society’s world atlas. Perhaps inspired by Norman Davies’ Vanished Kingdoms: The Rise and Fall of States and Nations, my current read, I began noting the territories and states that existed four decades ago, but no longer. A selection of the lesser-known:

canal zone

The US Panama Canal Zone was disestablished in 1979 and fully handed over to Panama in 1999 in accordance with the 1977 Torrijos–Carter Treaties. Incidentally, the Panama Canal Zone is also the birthplace of John McCain.


East Pakistan, which became the independent nation of Bangladesh after (West) Pakistan’s defeat in the 1971 Bangladesh Liberation War.


British Honduras, which gained independence in 1981 as Belize, the only country in Central America with English as an official language.

neutral zone

The “neutral zone” along the Saudi Arabian-Iraqi border, established in 1922 and only definitively resolved after the 1991 Gulf War.


The British colony of Southern Rhodesia, named for 19th century British imperialist Cecil Rhodes, unilaterally declared independence in a 1965 bid to preserve white supremacy. Unable to secure international recognition and beset by guerrilla movements, Rhodesia was succeeded by Zimbabwe in 1980.

south west africa

After Germany’s defeat in the First World War, German South-West Africa fell under South African administration. Apartheid South Africa’s attempts to informally incorporate the territory in the face of local independence movements proved unsuccessful, and South-West Africa became independent as Namibia in 1990.

upper volta

Gaining independence from France in 1960, the Republic of Upper Volta (named for the Volta Rouge, Volta Noire, and Volta Blanche rivers) was renamed Burkina Faso in 1984.

spanish sahara

Following the end of Spanish colonial administration in 1975, the status of the former Sáhara Español remains in doubt. Today Western Sahara is divided between the Moroccan-controlled north and western coast and the partially recognized Sahrawi Arab Democratic Republic, and holds the distinction of being both one of the most sparsely populated territories on Earth and the most populous entry on the United Nations list of non-self-governing territories.

trucial states

The Trucial States is an antiquated name for the British Protectorate that became the U.A.E. in 1971. Note the “Dubayy” spelling.*


In a high-water mark of pan-Arabism, Egypt and Syria attempted in 1958 to unite as the United Arab Republic. While Syria left the short-lived union in 1961, Egypt continued to use the U.A.R. designation until 1971.


The Yemen Arab Republic (North) and People’s Democratic Republic of Yemen (South) merged in 1990, a union observers hoped would end the Cold War-era rivalry between the two and unify the southwest Arabian Peninsula. However, South Yemen seceded in 1994 and was shortly afterward conquered by the north, again unifying Yemen.

*Update: See comment below.

Do Constitutional Monarchies Lead to Stability, or the Other Way Around?

By Taylor Marvin

Allan Ramsay, 'King George III'. Via Wikimedia.


At the Washington Post’s Wonkblog Dylan Matthews provocatively argues that constitutional monarchies are a sounder form of government than presidential republics or parliamentary democracies with a largely ceremonial head of state. Constitutional monarchy is, Matthews writes, “at worst, fully compatible with representative democracy, and, at best, makes representative democracy stronger.” In his excellent blog Suffragio Kevin Lees pokes holes in Matthews’ pro-monarchical argument, noting that many of Matthews’ points make the mistake of confusing correlation with causation. In particular, Lees attacks Matthews’ comparison highlighting constitutional monarchies’ above-average GDP per capita and life expectancy:

“There are a lot of historical and economic reasons that explain why constitutional monarchies, which are predominantly located in Europe, are so much richer and healthier. North America and Europe are, well, richer than Africa or the Middle East or South America, in general terms, but it seems like ‘having a constitutional monarchy’ is not incredibly high on the list of reasons why Europe’s standard of living is so much higher than Africa’s. The legacy of colonialism, for one.”

Of course Lees is correct, and his piece is very much worth reading. But even within Europe, I think you can take this point further. As Lees notes, Matthews’ (admittedly tongue-in-cheek) analysis isn’t simply misattributing contemporary Western Europe’s high development levels to its relatively common monarchical forms of government; it also mistakes monarchies’ survival as a cause, rather than a result, of historical social stability. Only the rare monarchy has survived to the present, and in Europe only in constitutional form. Generalizing, this political continuity is more likely to occur in European countries with greater historical stability — in more unstable countries, early modern era monarchies were gradually screened out by revolution or political upheaval.

The two metrics Matthews cites — life expectancy and GDP per capita — are both typically dependent on historical trends. Countries with leading per capita incomes today tend to have seen relatively constant, steady growth for decades, growth that is often indicative of stable public institutions, growing human capital, and durable market economies. In addition to steady growth and rising incomes, these same social institutions are also associated with political stability.

At the risk of oversimplifying the various national contexts that allowed surviving monarchies — those of Andorra, Belgium, Denmark, Liechtenstein, Luxembourg, the Netherlands, Norway, Spain, Sweden, and the UK — to avoid the (often literal) guillotine, monarchical continuity is a proxy for historical social stability. Later in the piece Matthews admits that constitutional monarchy doesn’t cause higher development levels, but is merely compatible with them. But really, it’s again the other way around: monarchies are more likely to survive in European countries with the same historical traits associated with higher development levels today.

Of course, in Europe at least the monarchies that survived were constitutional monarchies, while absolutist rulers were more often overthrown. But again, this is more likely an effect than a cause of social stability — the independent institutions and alternative political power centers able to coerce monarchs into ceding power were conducive to later economic growth.

Rehabilitating Pinochet?

Image by Archivo Clarín Argentina, via Wikimedia.


By Taylor Marvin

Following the recent coup in Egypt, the Wall Street Journal published a fairly run-of-the-mill editorial in favor of President Mohamed Morsi’s ouster. Arguing that the polarizing and Islamist Morsi government necessitated a military coup, the Wall Street Journal expressed hope that the Egyptian military would wisely steer Egypt back to democracy and resist the urge to govern the country directly. Accusing it of “trailing events at every turn,” the editorial’s authors also denounced the Obama administration’s foreign policy, while neglecting to admit that the US has little ability to positively influence events in Egypt, and even less ability to foresee them — again, a fairly typical argument from the Journal.

However, in its last paragraph the editorial veers into territory that is at best wildly historically myopic, and more likely simply deeply offensive:

“Egyptians would be lucky if their new ruling generals turn out to be in the mold of Chile’s Augusto Pinochet, who took power amid chaos but hired free-market reformers and midwifed a transition to democracy. If General Sisi merely tries to restore the old Mubarak order, he will eventually suffer Mr. Morsi’s fate.”

This is, to put it mildly, insane. After participating in and then subsuming the military junta that overthrew the democratically elected government of Salvador Allende in 1973, Pinochet personally ruled Chile for nearly two decades. In that time he oversaw the deaths of some 3,000 people (in a country of 13 million in 1990) and the torture and execution of democratic activists, fought all meaningful democratic reform, and nearly launched what would have been an entirely preventable conflict with Argentina. Ultimately, Pinochet left power not out of some respect for democracy, as the Journal seems to believe, but because he was essentially forced out. If the Wall Street Journal’s editors had any respect at all for Pinochet’s victims — or, perhaps more pertinently, any understanding of the legacy of his regime — they would not hold Pinochet up as an example for Egypt’s newly re-empowered generals.

As Colin M. Snider writes, this argument is “vile, disgusting, repugnant, vulgar, and ignorant.”

But perhaps more interesting is what this editorial represents. The Pinochet regime has long enjoyed some cachet among American conservatives, both for its anti-Communist stance and its neoliberal economic reforms, and during his tenure Pinochet enjoyed close ties with both the US government and neoliberal economists, notably Milton Friedman and Friedrich Hayek. With the end of the Cold War American elites had much less incentive to support anti-leftist Latin American military dictatorships, and generally turned away from previously-favored right-wing autocracies. But due to his free-market reforms and Chile’s subsequent economic growth the Pinochet regime continued to enjoy a degree of respect that other, once similarly favored regimes like the pre-1982 Argentine junta and Paraguay’s Stroessner regime gradually lost. This respect continued beyond Pinochet’s ouster, with American conservatives often rhetorically conflating arguments highlighting the regime’s economic success with a nebulous endorsement of it, while downplaying Pinochet’s crimes and the growth in Chilean inequality he oversaw.

But American economic conservatives ready to celebrate the Pinochet regime’s economic policies are usually quick to denounce its autocratic nature, even while implicitly endorsing the regime overall. This position stems from a somewhat understandable dilemma. In the American elite imagination the Pinochet regime is most often offered as a clear-cut economic success story — acknowledge the regime’s crimes and the whole narrative edifice threatens to come crashing down. Some commentators attempt to streamline this historical narrative by insisting that while Pinochet was a brutal dictator, the Communist-leaning Allende government he overthrew would have been worse. While this plays into American Cold War biases and draws on the specter of leftist insurgencies elsewhere in Latin America, it’s also a counterfactual, and ultimately not very convincing.

Given this rhetorical challenge — the contemporary conservative need to condone Pinochet’s economic policies while also denouncing his regime’s abuses — the Wall Street Journal simply elected to avoid the narrative bind entirely, drop the qualifications, and endorse the Pinochet regime wholeheartedly. Admittedly the editorial only mentions Chile in its last paragraph and is focused on another issue, but its failure to qualify its celebration of Pinochet at all remains noteworthy.

Pithily noting that “anyone familiar with the political views of the WSJ’s editors couldn’t have been too surprised,” Daniel Larison sees the Pinochet reference as a predictable repurposing of American foreign policy tropes to fit a new situation:

“On one level, it was just an old rehashing of Cold War-era justifications for U.S. support for anticommunist authoritarian rulers, except that Islamists were now filling the role that communists and socialists used to play. On another, it was a fairly predictable expression of support for perceived ‘pro-American’ forces abroad even if they happened to be military officers engaged in a coup against an elected government.”

This is of course correct. But it’s possible that there’s something else here. The Pinochet regime is now nearly a quarter century in the rearview mirror. With this growing historical remoteness, it would be unsurprising if American conservatives gradually dropped their qualifications when arguing in favor of the regime’s economic policies. After all, noting that a regime best known in the United States (I don’t think this is an exaggeration) for its arguably beneficial economic policies was also a reprehensible, anti-democratic dictatorship complicates the narrative. Given that the Pinochet regime is most often invoked in the US as an appropriated tool in American economic policy debates, this complexity is relevant, and unwanted. As time goes by I would not be surprised if explicit endorsements of the Pinochet regime like the Wall Street Journal’s become more and more common.

Correction: This piece originally misidentified the Wall Street Journal editorial as an op-ed.

Understanding the Space Race

By Taylor Marvin

Image via Wikimedia.


In late January Iran made the startling announcement that it had successfully launched a monkey into space. Claiming to have sent the monkey on a twenty-minute suborbital flight, Iran showcased the launch as a demonstration of the regime’s technical ability. But international observers quickly noticed that the monkey recorded entering the capsule didn’t resemble the one showcased after the flight, an embarrassing inconsistency the Iranians chalked up to a botched photo release.

Deception aside, this story is a reminder that the drama of space exploration, genuine or faked, remains a powerful tool for building national prestige. At a time of enormous sanctions-imposed economic strain, Iran claims its recent test flight is a prelude to one day sending a human into space. Human spaceflight ambitions aren’t limited to political-outcast Iran. In 2003 China became the third country to send a human into space, and plans to send a taikonaut to the Moon by 2020 at the earliest. India has also articulated tentative ambitions for its own crewed space program at some point in the future.

But despite the growing number of nations expressing space ambitions, today’s achievements in crewed spaceflight still fall short of the Space Race, the famed Cold War rivalry between the United States and Soviet Union that saw the world’s first satellite launch, first human in space, and, climactically, the Moon landings. This modern shortfall fits the broader pattern of the post-Space Race era: after the American Apollo lunar landing program ended in 1972 the practical ambitions of crewed space programs, in contrast to contemporary forecasts, dramatically declined.

Clearly, high-profile achievements in space remain an alluring goal for prestige-minded governments. But any framework explaining why governments choose to invest in civilian space programs must also explain why no human has ventured beyond Earth orbit since 1972. Did space exploration become less prestigious after the end of the Apollo program, or did the conditions that precipitated the Space Race somehow fundamentally change? How do today’s aspiring space powers like Iran fit into this framework?

On July 20, 1969, Neil Armstrong stepped onto the Moon’s surface and into history. Optimistic observers celebrated the Apollo 11 landing as the birth of a new era in human exploration. Apollo would be followed by further, far more ambitious crewed exploratory programs – Moon bases, Mars landings, and crewed flybys of Venus filled the dreams of NASA planners. But instead of heralding a new beginning, today the Apollo program is seen as the end of an era. New budgetary realities dawned, and the US and USSR restricted their crewed space programs to Earth orbit. Today, 44 years after Apollo 11, the ambitious dreams of crewed missions beyond the Moon have not materialized.

Perhaps depressingly, this dramatic shortening of ambitions isn’t puzzling, because the Space Race was never really about exploration at all. Instead, the triumphs of Sputnik, Vostok, and Apollo were driven by the cold cost-benefit analysis of hardened Cold Warriors. Crewed space programs are long-term projects that require massive up-front investments with no guarantee of success – national governments do not invest in them for idealistic reasons. Consequently, governments that elect to pursue crewed space programs perform sophisticated cost-benefit analyses before embarking on them. These costs and benefits move together depending on a program’s goal: more ambitious programs will cost more, but can intuitively be expected to return a greater boost to national prestige and international standing.

This cost-benefit framework is the key determinant of whether governments elect to fund ambitious crewed space exploration. The most obvious benefits of human spaceflight – which captures public attention in a way uncrewed exploration does not – are heightened domestic pride and international prestige; other benefits can include technical advancements and economic stimulus in strategic science and engineering sectors. Both an increased sense of nationalistic pride among domestic audiences and prestige on the world stage are valuable goods for governing regimes. However, the value policymakers assign these prestige-driven benefits is not decided in a vacuum. The practical value of marginal gains and losses of national prestige is driven by politics. Unpopular leaders facing domestic unrest will benefit more than secure ones from increased national pride among their selectorate. Similarly, international prestige is more valuable for states facing a hostile world system than an unthreatening one.

The costs of crewed space programs are obvious, but vary in nonintuitive ways. First, some objectives are more expensive to pursue than others. Second, some of the technologies required for crewed space exploration – particularly rockets – have military applications. These “dual-use” technologies allow policymakers to clear civilian space programs’ technological barriers with military development they would fund anyway, reducing the dedicated cost of the program.

If the decision to heavily invest in civilian space programs can be understood as a cost-benefit calculus, the uniquely dramatic achievements of the Sputnik-through-Apollo era must be explainable by a similarly unique confluence of inputs. This appears to be the case. The US-Soviet space race was the unique product of a bipolar, ideologically divided international order and a transient period of technological development that allowed civilian space programs to heavily leverage military necessities. The Space Race ended when these costs and benefits diverged. After the Apollo program ended, the expected investments required for further ambitious achievements in civilian human spaceflight grew, while the extent to which these prospective achievements’ prestige would contribute to national security fell.

First, the benefit side of the equation. The Cold War divided the world along ideological lines, with the twin Soviet and US-led blocs surrounded by a periphery of nonaligned states. In this bipolar system each opposing bloc sought to favorably shift the balance of power by attracting ideological allies. This made national prestige enormously important. The US and USSR both sought to attract unaligned nations to their respective camps by demonstrating the military and technological superiority of their system, superiority that was seen as evidence of eventual victory.

John Glenn aboard 'Friendship 7', 1962. NASA image via Wikimedia.


Spaceflight was a vital arena of this competition for prestige. News of space achievements, President Kennedy argued in a 1961 speech, had a powerful impact “on the minds of men everywhere, who are attempting to make a determination of which road they should take.” Importantly, these demonstrations were understood not only as peaceful achievements, but also as PR-friendly proxies for military prowess. Americans greeted the unexpected launch of Sputnik with something like panic, realizing that if the Soviets could put a satellite in orbit, they could do the same with a nuclear warhead.

Second, the cost. Space Race-era programs were enormously expensive; at its height NASA funding consumed over four percent of American federal spending. However, the era’s crewed space programs benefited from a unique synergy between civilian and military technological development. The new technologies required to put the first men in orbit – powerful rockets, dependable guidance systems, and heat shields that allowed a spacecraft to survive reentry into the Earth’s atmosphere – were the same as those developed in the quest to construct nuclear-armed intercontinental ballistic missiles (ICBMs). Early nuclear weapons, particularly thermonuclear devices, were heavy objects that required powerful rockets to deliver to their targets. These rockets were easily adapted into civilian launch vehicles: President Eisenhower once explicitly noted that the military rocket engines required to deliver nuclear warheads were also “so necessary in distant space exploration.”

Much like the ideological rivalry between the US and USSR made civilian prestige projects a determinant of the balance of power, the military rivalry between the two superpowers and emerging awareness of the primacy of ICBMs in nuclear war made these technological developments top priorities. As deployed ICBM numbers rose, the technologies required to put men into space were materializing regardless of the value policymakers assigned to exploration. It is difficult to overstate the role dual-use military developments played in enabling the early achievements that opened the Space Race.

This dual-use synergy allowed US and Soviet policymakers to leverage technology already in development for their civilian space programs. But importantly, there is no inherent reason why the technological requirements of civilian space programs and the cutting edge of military development must align. Indeed, this dual-use synergy was transient, and began to break down by the late Space Race. Medium-lift liquid-fuel rockets similar to those powering early ICBMs are the dominant technological hurdle only in comparatively primitive civilian space programs. Once these rockets matured, new hurdles less related to military requirements began to appear – for example, the heavy-lift Saturn V rocket and lunar lander vital to the Apollo program had little technological relevance to military armaments.

By the mid-1960s the preconditions that spurred the Space Race had clearly changed. Funding for crewed space exploration evaporated in both the US and USSR. In America, once it became clear that the Apollo program would be a success NASA’s budget as a percentage of federal spending fell precipitously. The final Apollo missions were cancelled, as was the Apollo Applications Program, intended to adapt existing Apollo hardware to ambitious new missions. Likewise, the Moon bases and crewed missions to Mars early space planners and science fiction authors judged just around the corner never materialized.

Why? Space achievements had not grown less prestigious. To be sure, Americans lost interest in the Apollo Moon landings as the novelty wore off, but that does not mean unprecedented achievements would not have remained a powerful tool for building national prestige. Instead, the value policymakers placed on the benefits of national prestige had changed along with the international order.

The Space Race was conceived during some of the hottest years of the Cold War – Sputnik 1 was launched in 1957, five years before the Cuban Missile Crisis. But by the time the Apollo program landed astronauts on the Moon, the dynamics of the Cold War were changing. The Nixon-era détente between the US and USSR relaxed tensions, making it harder for policymakers to justify expensive prestige projects on balance of power grounds. But of course, détente did not last, and the Soviet invasion of Afghanistan and President Reagan’s “evil empire” rhetoric made the 1980s one of the most dangerous decades of the Cold War.

Soviet ‘Buran’ spacecraft, 1988. Via Wikimedia.

But if Cold War tensions were so high, why did another civilian Space Race fail to materialize during the 1980s? Clearly, the prestige motivation had not vanished. President Reagan, eager to regain the American national prestige he perceived as lost in Vietnam and Carter-era malaise, pushed for an aggressive Space Shuttle launch schedule that contributed to the Challenger disaster. But despite heightened Cold War tensions, the political benefits of ambitious space spending were now lower. Spaceflight as a whole was no longer novel, making it arguably less impressive and high-profile. Adversaries’ achievements also became less threatening. Unlike during the opening days of the Space Race, Americans could not spin Soviet space achievements as a threatening aspect of a “missile gap” because by the 1980s ICBMs were a proven, stockpiled weapons technology.

But the cost side of the ledger was what shifted the most. First, the dual-use synergy between civilian and military space technological development largely vanished. Unlike the advances in rocketry of the 1950s and 1960s, by the 1980s the technical requirements of civilian and military space programs had diverged, making broadly dual-use technologies rare. The staged rockets that powered ICBMs were now mature technologies, and later missile development worked towards improved accuracy and increased survivability. Expanding crewed space exploration beyond the Moon would require major progress in novel propulsion technologies, life support, system reliability, and automation. All of these advancements had only tangential military relevance. Instead, the military space programs of the post-Apollo era brought research funding to technological fields unconnected with crewed spaceflight. The Reagan-era Strategic Defense Initiative, an ambitious ballistic missile defense scheme, focused research on laser and missile interception technology. None of these military projects spurred major advancements in dual-use technologies that could be leveraged for new, ambitious crewed space programs. This remains largely true today.

Secondly, the post-Apollo space establishment suffered from a lack of clear, obvious goals. This was not the case for the classic Space Race: first, put a satellite in orbit; then, a man; finally, the Moon. But after Apollo, the next goal of crewed space exploration was unclear. Mars was an obvious, high-profile choice, but a crewed mission to Mars likely would have been much more difficult than the Apollo program, and national leaders never pushed for one in a serious way. To be sure, NASA had grand preliminary plans for human exploration beyond the Moon, but funding – and likely, technical capabilities – for these ambitious missions was never available. This absence of an obvious, achievable goal hampered prospective Reagan-era and later American Cold War crewed spaceflight programs.

This cost-benefit framework offers an explanation for why the US and USSR invested heavily in crewed space programs during the 1950s and 1960s, but not during the last decades of the Cold War. While the international system has changed immeasurably since the dissolution of the Soviet Union, this same cost-benefit logic drives today’s policymakers’ decisions to invest in crewed space programs.

Again, first the benefit side of the tradeoff. High-profile crewed space achievements remain impressive. While modern China and India may not be ideological states in a bipolar world, they still retain significant prestige motivations for crewed space programs. This is particularly true for China, which seeks to improve its position in the world order through demonstrations of economic, military, and technological power. Much like the 2008 Beijing Olympics, to Chinese policymakers the civilian space program – here “civilian” is a description of goals rather than administration, as China’s crewed space program is run by its military – is intended to cement China’s great power status in the minds of international observers. But importantly, China’s prestige-driven impetus for space investments is nowhere near that facing the security-minded Cold War-era US and USSR. This lower value assigned to the benefits of space achievements is reflected in the relatively relaxed priority of China’s crewed space program: China has achieved notable successes in space, but the pace of its efforts is not comparable to the Space Race. Clearly, China – which isn’t facing a potentially existential conflict with an ideological foe – does not judge space-derived gains to national prestige as valuable as the Cold War rivals did. This, of course, makes sense. For China, prominent achievements in human spaceflight are a means of bettering its international position, not a top-priority national security issue.

Importantly, all of today’s new or aspiring space powers have only replicated the feats accomplished by the Soviets and Americans a half century ago. This, again, is practical: as today’s comparatively peaceful international order lowers the value of national prestige projects, aspiring space powers accordingly set their aspirations lower. The comparatively modest scope of these practical ambitions – “been there, done that,” in the words of uncharitable American observers – also allows new space powers to benefit from the dual-use synergy between military and civilian rocket technology, allowing them to reap prestige benefits from the ICBM technology they pursue anyway. In lower-capability states, aspirations to extend rocket development to human spaceflight may be only a rhetorical public relations stunt. Indeed, Iran’s space program is frequently alleged to be nothing more than cover for ballistic missile development.

During the 1950s and 1960s a bipolar international order and a fortuitous alignment between the technologies required for civilian space exploration and nuclear deterrence combined to create the conditions that motivated heavy investments in civilian space programs. This is not an exaggerated description – the only reason the Space Race occurred was that the US-Soviet rivalry happened to coincide with the period when long-range military rockets were an emerging determinant of the balance of power. Without this synchronicity between an adversarial international system, conflation of national prestige and security, and convergence of civil and military space technological requirements, the Space Race would not have materialized. Barring a massive fall in the expected costs of ambitious human exploration, this logic suggests that the aspirations of new and aspiring spacefaring nations are unlikely to surpass the Space Race unless the international system reverts to the hostility of the Cold War’s height.

Why No Mediterranean Buddhism?

By Taylor Marvin

Image by Classical Numismatic Group, via Wikimedia


I’m currently reading International Systems in World History, whose authors Barry Buzan and Richard Little raise a fascinating question: why did Buddhism, which originated in northern India but was successfully exported through South and East Asia, never take root in the West?

Buddhism was spread throughout Eastern Asia through Indian trade networks. However, these networks were never able to expand westward, because overland trade routes through the Iranian plateau and seaborne trade through the adjacent Persian Gulf were blocked by the Parthian empire and its third-century Sassanid successor. The Parthians’ strategic position south of the Caspian Sea allowed them to extract rents from trade between the rich Chinese and Roman empires that bookended Eurasia. Conscious of the profitability of this position, the Parthians deliberately preserved their valuable middleman status by keeping Roman and Eastern traders from ever interacting directly – going so far as to once mislead a Chinese expedition aiming to make contact with the Romans – lest the two sides realize how costly the Parthian intermediary was. This policy blocked direct trade between the Mediterranean and East Asia, as well as the cultural diffusion inherent to direct trade networks.

The Romans were aware of this costly impediment to direct trade. Concerned about their trade deficit with China, the Romans tried but failed to bypass the Parthian intermediary. Overland trade routes north of Parthian control were far outside of the Roman sphere of influence. The Emperor Trajan’s short-lived conquest of Mesopotamia may have been partially motivated by the desire to secure a Persian Gulf seaport that would allow traders to bypass the Parthian empire and lower trade costs. However, the Romans were unable to hold Trajan’s overextended conquests, though they periodically contested western Mesopotamia for the next two centuries.

This is particularly interesting because Buddhism would likely have found a receptive audience in the West. Rome was famously receptive to Eastern religions – with the exception of Judaism and later Christianity, whose strict monotheism Roman authorities perceived as a threat to the civic responsibility of the emperor cult – and it isn’t unreasonable to speculate that Romans would have adopted Buddhist teachings with the same enthusiasm as the Cybele or Isis cults. This isn’t to say that the spread of Buddhism into the Mediterranean basin would have prevented the eventual dominance of Christianity – which, after all, outcompeted numerous other religious systems – but the spread of Buddhist culture into the Roman world would certainly have led to a fascinatingly different Europe.

Alternative history writers, take note.

Correction: I later edited this post to clarify that the economic motivation for Trajan’s invasion is a theory, and added additional links.

Armageddon Averted

By Taylor Marvin

I’m currently reading Stephen Kotkin’s Armageddon Averted: The Soviet Collapse, 1970-2000, which is short, highly-readable, and recommended. Kotkin has a flair for drily understating the farcical:

“Glasnost remained mostly a slogan right through 1986. Even geographical locations that could be indicated on Soviet maps were still being shown inaccurately, to foil foreign spies, as if satellite imaging had not been invented, while many cities were entirely missing… Widespread fictitious economic accounting was foiling planners to the point where the KGB employed its own satellites to ascertain the size of Uzbek cotton harvest.”

Kotkin also relays this gem from the Gorbachev-Yeltsin transition:

“On 27 December, four days prior to the date Gorbachev was supposed to vacate his Kremlin office, the receptionist called him at home to report that Yeltsin and two associates were already squatting in the coveted space, where they had downed a celebratory bottle of whisky. It was 8:30 am.”

Khrushchev, Iran, and Bad Historical Analogies

By Taylor Marvin

Confrontation at sea. US Navy photo via Wikimedia.

On a recent episode of NPR’s Weekend Edition commemorating the 50th anniversary of the Cuban Missile Crisis, host Rachel Martin spoke with Graham Allison of the Kennedy School of Government about the foreign policy lessons of the crisis. Martin raised the issue of Iran’s nuclear ambitions, noting that Israeli Prime Minister Benjamin Netanyahu suggested the comparison during his recent speech before the UN General Assembly, where he argued President Kennedy prevailed by setting a red line that “prevented war and helped preserve the peace for decades.” Allison agreed with Netanyahu’s analogy, and ended the discussion by noting that the current dispute with Iran leaves no good options on the table:

“So I think we’re now into a season where I would hope that after the election, whomever is elected will become intensely focused and inventive about options that are not very good — I call them ugly options, very ugly options — but that would nonetheless be better than attack or acquiesce.”

Of course Allison is right — there are no good options here, despite neoconservative protestations to the contrary. However, I’m not convinced that the comparison between the Cuban Missile Crisis and the current dispute with Iran holds up in any meaningful way to scrutiny.

President Kennedy and Premier Khrushchev were able to avert war because each preferred a negotiated compromise to fighting. The Soviets had initially put nuclear weapons in Cuba – contrary to their previous assurances to Kennedy that they would not – out of a desire to remedy their strategic missile imbalance with the US, credibly deter a US invasion of the island on behalf of Castro, and possibly as a future bargaining chip. The Soviets eventually withdrew the missiles from Cuba in exchange for rather meaningless concessions from the US: a tacit pledge not to invade Cuba and a secret pledge by Kennedy to remove obsolete Jupiter IRBMs from Turkey, whose secret nature did not allow the Soviets to present the concession as a victory. Ultimately the Soviets backed down because they knew, unlike the Americans, that there were already armed nuclear missiles in Cuba that would certainly be unilaterally launched by local commanders in the event of a US invasion or airstrikes. While the end of the crisis was a disaster for the Soviets, even the final settlement’s weak US pledge not to invade and the secret removal of the Jupiters was preferable to escalation towards an American invasion, which the Soviets alone knew would certainly lead to nuclear war.

This logic does not extend to the US and Israel’s confrontation with Iran, because it is unclear if Iran holds war with the US or Israel as the worst possible outcome. A nuclear strike or ground invasion by the US against Iran is clearly off the table — at worst, a war between the US and Iran would mean an ongoing air campaign against military targets, naval warfare in the Gulf, and an Iranian terror campaign against American targets abroad and, through its Hezbollah proxy, Israel. Barring an exceedingly unlikely mass uprising by the Iranian populace against the government, this is a survivable outcome for the regime.

Of course, survivable does not necessarily equal preferable. But there are reasons to think the Iranian regime would hold a limited US attack as preferable to publicly walking back from its nuclear program. The nuclear program remains popular within Iran, though support for the program has fallen. If the Iranian regime was popularly perceived to have been forced to abandon nuclear development, the program’s popularity would undoubtedly rise through a “lost cause” mentality. Backing down in response to foreign pressure would likely be extremely politically risky for policymakers, and would be perceived as a national embarrassment that would generate pushback both from conservative sectors of Iranian society and from hardliners within the Iranian government whose opposition to the United States is an integral part of their political DNA. Even if decisionmakers in Iran wanted to abandon the nuclear program, these domestic audience costs within and outside of the regime would make it difficult to do so. Entirely justified US concern over Iran’s history of misleading the international community would make it difficult for IRI leaders to use private negotiations to sidestep these audience costs.

A war would certainly be painful for Iran: the broad US air campaign against Iranian nuclear and air defense targets required to delay the Iranian nuclear program by up to a decade would certainly kill numerous civilians, and would destroy difficult-to-replace military infrastructure. A wider conflict sparked by Iranian retaliation would be more costly still. However, an American strike would not be a disaster for hardliners within the Iranian government. As Saddam Hussein painfully learned in 1980, the Iranian people are quick to rally against a perceived aggressor – despite American protestations, a strike targeting the nuclear program would be viewed as an unprovoked attack on their homeland by the vast majority of the Iranian population. A strike would solidify the position of hardliners and give them a political blank check to resume terrorist violence abroad, as well as instantly discredit potential liberal reformers both within the regime and in Iranian civil society.

From the perspective of IRI hardliners an Israeli strike would bring greater political benefits – antisemitically charged domestic anger and the marginalization of their political opponents – with significantly less damage to both the nuclear program and military infrastructure than a more capable American strike. A much higher-priority drive towards nuclear capability would soon follow, with great popular support.

In the NPR interview Allison remarks that Netanyahu’s reference is basically correct “with respect to red lines and the ways they can constrain the competition, and therefore contribute to preventing war.” The problem is that, depending on the ideologies of key Iranian decisionmakers, the relevant red line Tehran will respond to may lie beyond limited war. The Cuban Missile Crisis ended peacefully because both actors viewed their ultimate settlement as preferable to war, and the Soviets accepted a lopsided agreement because they recognized that the cost of war would be higher than the Americans did (as only they knew, with nuclear weapons armed in Cuba, a US invasion would initiate nuclear war). This peaceful outcome would not have been possible if either side had been willing to “escalate through” war before reaching its minimum acceptable outcome. If Iranian policymakers’ domestic audience costs and ideology lead them to hold a limited war preferable to backing down on the nuclear issue, the Cuban Missile Crisis is a bad historical analog. In fact the Iranian conflict’s long duration – “like a Cuban Missile Crisis in slow motion” – has substantially raised Tehran’s domestic audience costs by making its commitment to the right to develop domestic nuclear energy more firmly anchored in the minds of Iranians.

In 1962 Khrushchev was able to make disproportionate concessions – withdrawing nuclear weapons from Cuba without being able to publicly reveal the US’ less strategically valuable parallel Jupiter concession – because he was in a position to dictate Soviet policy. It is not clear that Iranian policymakers today have this same flexibility, as hardliners and their conservative constituents still enjoy considerable power in the IRI government; certainly more than their reformist opponents. The importance of hardliner political support means that allowing Iran to be coerced out of the nuclear program would likely be more politically costly for current policymakers than the Cuban Missile Crisis’ lopsided diplomacy was for Khrushchev. However, Iran’s dual government of the elected presidency and parliament and unelected Supreme Leadership offers an interesting “escape valve” to avoid the domestic audience costs associated with acquiescing to foreign pressure. If Khamenei reaches the decision that the nuclear program is not worth the costs – there is evidence that the drive for nuclear capability is not set in stone – he could place responsibility for the nuclear program, and the costs of abandoning it, on the increasingly marginalized President. As Kenneth M. Pollack relates in his excellent if dated history of US-Iranian relations, both Supreme Leaders have frequently used this tactic to walk back from policies they came to regret.

Note: Immediately after wrapping up this post I noticed that Michael Dobbs has a piece on the same topic up at Foreign Policy, though he examines the question from a different angle. Check it out. Daniel Larison also has a post up today about the increased chances of war with Iran under a Romney presidency.


On the subject of the Cuban Missile Crisis, check out The Atlantic’s excellent photo collection commemorating the crisis.

Conscripts in the Soviet Union

By Taylor Marvin

Soviet soldier armed with an AK-74 rifle, 1980. Image by Wikimedia user J.Lemeshenko.

Today I stumbled upon an interesting used book: The Soviet Union Today: An Interpretive Guide, edited by James Cracraft and released in 1987.

I’m just beginning to work my way through the volume, but Mikhail Tsypkin’s chapter on Soviet conscripts is both fascinating and depressing. In a system that put little value on the individual, the two-year term of service for conscripts in the Red Army was miserable and dangerous, even in peacetime. A hazing culture in which veteran second-year conscripts brutalized their younger comrades functioned as an officially sanctioned method of enforcing discipline and passing on skills. Drunkenness was a constant problem, and punishments were brutal. Soldiers accused of transgressions like drinking, challenging officers, or going AWOL for less than 24 hours were sent to the guardhouse for up to 15 days, where freezing cold, starvation diets, and being forced to stand for 18 hours a day were normal. Up to 15 percent of Soviet conscripts possessed only a basic grasp of Russian. Soldiers were prevented from owning civilian clothes, both to discourage them from going AWOL and to make it more difficult for soldiers to illegally acquire alcohol, which shopkeepers were forbidden to sell to soldiers in uniform.

Conscripts were deliberately stationed far from their own communities, both to combat the rampant problem of unmotivated soldiers going AWOL and decrease soldiers’ sympathy for the local population in the event that they were called out to put down civilian unrest. Soldiers stationed in the Eastern European satellites were in effect confined to quarters for their entire terms of service to avoid “cultural contamination”.

Soviet officials went to great lengths to keep conscripts isolated. Radios were often confiscated, and soldiers frequently had no understanding of the global conflict in which they were unwilling participants. From the text:

“The political indoctrination system often fails to provide plausible explanations of international crises that result or might result in the use of Soviet military might. Thus, at least some Soviet soldiers during the 1968 invasion of Czechoslovakia thought they were in West Germany or even in Israel. During the 1968 crisis a Soviet infantry division was alerted and moved to the Sino-Soviet border, but its enlisted men were not told why. Similarly, neither enlisted men or officers of a surface-to-air missile battery alerted during the 1973 US-Soviet showdown were offered any explanation of the international situation.”

The Israel anecdote, in particular, seems incredible, and it’s easy to mock ignorant conscripts for not knowing the obvious differences between Central Europe and Israel. But it makes perfect sense upon reflection. Soviet citizens had the strength and unity of the Warsaw Pact drilled into them all their lives, and the idea that a Communist ally could turn away from the Soviet Union would not be intuitive. Of course, to a conscript from a small Siberian town with little knowledge of external politics and no access to images of the outside world, the differences between Czechoslovakia and Israel would not be obvious.