1. This one is very in the weeds, but I was very confused about some conflicting results Pinker and Braumoeller get in testing the hypothesis of a break in war incidence after 1945. Pinker (2011: 252) writes: "Taking the frequency of wars between great powers from 1495 to 1945 as a baseline, the chance that there would be a sixty-five year stretch with only a single great power war (the marginal case of the Korean War) is one in a thousand. Even if we take 1815 as our starting point, which biases the test against us by letting the peaceful post-Napoleonic 19th century dominate the base rate, we find that the probability that the postwar era would have at most four wars involving a great power is less than 0.004, and the probability that it would have at most one war between European states (the Soviet invasion of Hungary in 1956) is 0.0008." Braumoeller (2019: 27-8) gets different results by modelling the onset of great power war in a given year as a binomial distribution with p = 0.02, based on the rate of great power war in the last five centuries: "the probability of observing seven continuous decades of peace … is 24.3%" (28). He also writes: "it would still take about 150 years of uninterrupted peace for us to reject conclusively the claim that the underlying probability of systemic war remains unchanged" (28). Both Pinker and Braumoeller rely primarily on Levy ([War in the Modern Great Power System] 1983) to estimate the rate of great power war, so I don't understand why they get such radically different results. What's going on?
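To make the puzzle concrete: under a binomial model, the probability of at most k war-onset years in n years is extremely sensitive to the assumed annual onset probability p. A minimal sketch (pure Python; Braumoeller's p = 0.02 is from the book, but the other p values are purely illustrative, not Levy's estimates):

```python
from math import comb

def prob_at_most(k, n, p):
    """P(at most k 'war onset' years in n years), under Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Braumoeller's model: p = 0.02, seven continuous decades of peace.
print(prob_at_most(0, 70, 0.02))  # ~0.243, matching his 24.3%

# The same observation looks wildly improbable if the assumed base
# rate is much higher (illustrative values only):
for p in (0.02, 0.10, 0.25):
    print(p, prob_at_most(1, 65, p))
```

One natural hypothesis (ours, not something either author states) is that the disagreement is about the reference class rather than the arithmetic: Braumoeller counts annual onsets of great power war, which are rare, while Pinker's baseline appears to be the much higher historical frequency of wars involving great powers, and as the loop above shows, the same seventy-year observation swings from unremarkable to vanishingly improbable as p rises.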
2. Battlefield deaths generally do not count civilians killed directly or indirectly as a result of military conflict. Apparently, it is extremely difficult to reliably measure total excess mortality due to war, and as a result battlefield deaths are used as the standard measure (Pinker 2011: 299-300; Braumoeller 2019: 101). At the same time, authors like Kaldor ([New and Old Wars] 1999) argue that civilian deaths have increased significantly as a share of all war deaths, with civilians now typically the majority of those killed as a result of war, and Roberts ['Lives and statistics'] records estimates that roughly 40% of casualties in Bosnia-Herzegovina, 1991-5, were civilians, and between 75% and 83% in the Second Gulf War. Given that we lack reliable data for so important a part of the overall picture, are contemporary debates of the kind between Pinker and Braumoeller, about trends in the severity/intensity/prevalence of battle deaths, actually telling us very much about whether wars are getting better or worse as a 'public health problem'?
3. Braumoeller (2019: 179) asserts that “[t]he four decades following the Napoleonic Wars were, by a significant margin, the most peaceful period on record in Europe.” I didn't feel he said very much to explain the grounds of this assertion in the book. On what basis can it be said that the period between the Congress of Vienna and the outbreak of the Crimean War was significantly more peaceful than that between the end of World War II and Perestroika? I don't know the former period well, but just looking at the list of conflicts in Europe during these periods from Wikipedia, this didn't seem to me especially plausible. (Possibly I'm just misunderstanding what he's saying, and the claim is that the decades after the Napoleonic Wars were a lot more peaceful than any before then.)
Great post! Like MichaelA, I'd be really interested in something systematic on the reversal of century-long trends in history.
With respect to the 'outside view' approach, I wondered what you would make of the rejoinder that actually, over the very long run, autocracy is the outlier - provided that we include hunter-gatherers?
On the view I take to be associated with Christopher Boehm's work, ancestral foragers are believed to have exhibited the fierce resistance to political hierarchy characteristic of mobile foragers in the ethnographic record, relying on consensus-seeking as a means of collective decision-making. In some sense, this could be taken to indicate that human beings have lived without autocracy, and with something that could be described as vaguely democratic, throughout virtually all of their history. Boehm writes: "before twelve thousand years ago, humans basically were egalitarian. They lived in what might be called societies of equals, with minimal political centralization and no social classes. Everyone participated in group decisions, and outside the family there were no dominators" (Hierarchy in the Forest pp. 3-4)
Obviously, you can make the rejoinder that the relevant reference class should be 'states', and so shouldn't include acephalous hunter-gatherer bands; but by the same logic, I take it someone could claim that the reference class should be narrowed further to 'industrialized states' when we make our outside-view forecast about how long democracy will remain popular. The difficulty of fixing the appropriate reference class here seems to me to raise doubts about how much epistemic value can be derived from base rates, and to suggest that predictions need to be grounded more firmly in the sorts of causal questions that are focal later in your post: understanding why hunter-gatherer bands are egalitarian, agrarian states aren't, and industrialized economies have tended to be.
David Thorstad and I are currently writing a paper on the tools of Robust Decision Making (RDM) developed by RAND and the recommendation to follow a norm of 'robust satisficing' when framing decisions using RDM. We're hoping to put up a working paper on the GPI website soon (probably about a month). Like you, our general sense is that the DMDU community is generating a range of interesting ideas, and the fact that these appeal to those at (or nearer) the coalface is a strong reason to take them seriously. Nonetheless, we think more needs to be said on at least two critical issues.
Firstly, measures of robustness may seem to smuggle probabilities in via the backdoor. In the locus classicus for discussions of robustness as a decision criterion in operations research, Rosenhead, Elton, and Gupta note that, given certain assumptions about the agent's utility function, using their robustness criterion is equivalent to maximizing expected utility with a uniform probability distribution. Similarly, the norm of robust satisficing invoked by RDM is often identified as a descendant of Starr's domain criterion, which relies on a uniform (second-order) probability distribution. To our knowledge, the norm of robust satisficing appealed to in RDM has not been stated with the formal precision adopted by Starr or by Rosenhead, Elton, and Gupta, but given its pedigree, it's natural to worry that it ultimately relies implicitly on a uniform probability measure of some kind. This seems at odds with the skepticism toward (unique) probability assignments that we find in the DMDU literature, which you note.

Secondly, we wonder why a concern for robustness in the face of deep uncertainty should lead to the adoption of a satisficing criterion of choice. In expositions of RDM, robustness is frequently contrasted with optimizing, and a desire for robustness is linked to satisficing choice. But it's not clear what the connection is. Why satisfice? Why not seek out robustly optimal strategies? That's essentially what Starr's domain criterion does: it looks for the act that maximizes expected utility relative to the largest share of the probability assignments consistent with our evidence.
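The contrast between the two rules can be made concrete with a toy example (our own construction, not drawn from the RDM literature; the acts, payoffs, and candidate distributions are all illustrative). Given a finite set of probability distributions standing in for "the assignments consistent with the evidence", the domain criterion picks the act that maximizes expected utility under the largest share of them, while a robust-satisficing rule picks the act whose expected utility clears a threshold under the largest share:

```python
# Toy comparison of Starr-style domain criterion vs. robust satisficing.
# `utilities` maps each act to its payoff in (state1, state2);
# `candidates` is a finite stand-in for the set of probability
# assignments consistent with the evidence. All numbers illustrative.

utilities = {
    "bold":     (10.0, 0.0),   # great in state1, bad in state2
    "cautious": ( 6.0, 5.0),   # decent in both states
}
candidates = [(0.8, 0.2), (0.7, 0.3), (0.4, 0.6)]

def eu(act, dist):
    """Expected utility of an act under one candidate distribution."""
    return sum(p * u for p, u in zip(dist, utilities[act]))

def domain_criterion(acts, dists):
    """Act that maximizes EU under the largest share of distributions."""
    wins = {a: 0 for a in acts}
    for d in dists:
        wins[max(acts, key=lambda a: eu(a, d))] += 1
    return max(wins, key=wins.get)

def robust_satisficing(acts, dists, threshold):
    """Act whose EU clears the threshold under the most distributions."""
    score = {a: sum(eu(a, d) >= threshold for d in dists) for a in acts}
    return max(score, key=score.get)

print(domain_criterion(utilities, candidates))                 # -> bold
print(robust_satisficing(utilities, candidates, threshold=5.0))  # -> cautious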