(iv) Roodman’s review of incarceration notes that multiple studies find strong evidence that incapacitation works to prevent crime.
But that's tautological or close to tautological given Roodman's definition of incapacitation (p. 10, Sec. 2.2.2):
Because prison crime is so rarely studied, and because crime outside of prison has particular political salience, “incapacitation” in this review will refer to crime outside of prison.
(I say "close to tautological" because some crimes committed by someone in prison, e.g. mail fraud, might nonetheless be considered crimes outside of prison.)
If we consider total crime, not just reported crime or crime outside of prison, then the quoted incapacitation figures will be overstated. Indeed, it's quite plausible that the full incapacitation effect is negative on some margins. Consider, for instance, someone in jail for a relatively short period, say a week or a month. I would guess their risk of being a victim of crime in jail (theft, assault, rape) is typically higher than their risk of committing, or being a victim of, a crime outside of jail over the same period.
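To make the accounting concrete, here is a toy sketch of the net effect of a short jail stay, counting crime inside jail as well as outside. All the probabilities are made up for illustration; nothing here comes from Roodman or the underlying literature.

```python
def net_crimes_prevented(p_commit_outside, p_victim_outside,
                         p_commit_inside, p_victim_inside):
    """Expected crimes avoided by jailing one person for a fixed period.

    Positive means jailing reduces total expected crime over the period;
    negative means total crime (inside plus outside) goes up.
    """
    crimes_if_free = p_commit_outside + p_victim_outside
    crimes_if_jailed = p_commit_inside + p_victim_inside
    return crimes_if_free - crimes_if_jailed

# Hypothetical one-month probabilities for a low-level offender:
effect = net_crimes_prevented(p_commit_outside=0.05, p_victim_outside=0.02,
                              p_commit_inside=0.03, p_victim_inside=0.15)
print(effect)  # negative: in this toy case, jail increases total expected crime
```

The sign of the result is entirely driven by the assumed in-jail victimization rate, which is exactly the term that drops out when "incapacitation" is defined as crime outside of prison.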
I agree with this. Estimates vary, but there is something like <a href="https://www.pionline.com/esg/global-esg-data-driven-assets-hit-405-trillion">$40 trillion of ESG assets</a> (for comparison, the total market value of all cryptocurrencies is currently under a trillion dollars). So there is a lot of potential for impact.
The focus of discussion on ESG (including in this piece) is typically on which shares to own or not own, neglecting an important potential lever of change: how to vote the shares. An interesting example of an ESG fund in this regard is the ETF <a href="https://etf.engine1.com/vote">VOTE</a>, which owns all 500 stocks in its underlying index but seeks to make the world a better place by voting for better corporate behavior. Some ESG funds do both, but many ESG funds don't make an effort to vote their shares in an ESG way. So, for example, Vanguard votes its ESG funds exactly the same way it votes its regular funds.
Your food resilience work is great: fascinating and really important! Indeed, I first heard of your supervolcano paper via your interview with Rob Wiblin, which was primarily about feeding humanity after a catastrophe. In the grand scheme of things, that's rightly the higher priority, but the supervolcano stuff also caught my interest.
I happen to know a couple of volcanologists, so I asked them about your paper. They weren't familiar with it, but independently stressed that something quite tractable that would benefit from more resources is better monitoring of volcanoes and prediction of eruptions.
The typical application of forecasting eruptions is evacuation. But that's sociologically tricky when you inevitably have probabilities far from 1 and uncertain timelines, since an evacuation that ends up appearing unnecessary will lead to low compliance later (the volcanologists "cried wolf"). With interventions to prevent an eruption, that's much less of an issue. Say a forecast gave a certain supervolcano a 20% probability of erupting in the next century, many orders of magnitude above the base rate. That's still realistically pretty useless from the point of view of evacuation, but it would make your kind of interventions very attractive (if they work in that case).
So if it could be shown that these interventions are likely tractable even when a potential near-term eruption has been detected, then that would justify increased investment both in detection/forecasting and in developing these approaches.
That was a really interesting paper!
Has there been any follow-up work by you or others to refine your risk estimates, in particular to estimate the change to the hazard rate?
So for example, you consider covering Yellowstone with 25 cm of unconsolidated material as a way to delay the next eruption and give us time to develop technology for a more permanent solution over the next, say, 50 or 100 years. You estimate that intervention increases the expected value (EV) of the time to the next eruption by 100 years. That's great, but I think what we really care about is something more like the hazard rate over the near term: what is the probability of preventing an eruption over the next 50 or 100 years? If the rate at which the pressure in the magma chamber increases is roughly constant, this distinction doesn't really matter, and a 100 year increase in EV means an eruption in the next 50 years is much less likely. But if the process is far from uniform, the 100 year increase in EV might not be as great as it sounds. If, for example, the process is driven by large jumps in pressure occurring roughly every 1000 years, then increasing the EV by 100 years only decreases the hazard rate by about 10%: an eruption in the near term is still 90% as likely after the intervention as before.
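The contrast between the two regimes can be sketched numerically. All parameters here are hypothetical, chosen only to mirror the arithmetic in the paragraph above: a steady-loading model where the intervention shifts the eruption time by 100 years, versus a jump-driven model where the same 100 year EV gain only shaves 10% off the hazard rate.

```python
import math

HORIZON = 50.0  # years we care about

# Model A: steady pressure build-up. Suppose, absent intervention, the next
# eruption time is uniformly uncertain over the next 1000 years (a made-up
# prior). Extra cover worth 100 years of accumulation shifts that time by +100.
def p_eruption_uniform(shift, window=1000.0, horizon=HORIZON):
    # P(T + shift <= horizon) with T ~ Uniform(0, window)
    return max(0.0, min(horizon - shift, window)) / window

base_a = p_eruption_uniform(shift=0.0)    # 5% chance of eruption within 50 years
intv_a = p_eruption_uniform(shift=100.0)  # 0%: the +100y EV wipes out near-term risk

# Model B: eruptions triggered by rare pressure jumps (Poisson, ~1 per 1000
# years). If the intervention only neutralizes the triggering jump 10% of the
# time (again worth ~100 years of EV), the hazard rate falls by just 10%.
lam = 1.0 / 1000.0
base_b = 1 - math.exp(-lam * HORIZON)        # ~4.9%
intv_b = 1 - math.exp(-0.9 * lam * HORIZON)  # ~4.4%, still about 90% of baseline

print(base_a, intv_a, base_b, intv_b)
```

Same EV gain in both models, but the near-term risk reduction differs enormously, which is why the hazard rate over the next 50 to 100 years seems like the more decision-relevant quantity.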
Another consideration: are the dynamics any different between intervening at a random time and intervening when there are signs an eruption may be coming soon (but still enough time to complete the intervention)?
Thanks for the reply. I agree that's a natural tentative interpretation of Table 26, taken in isolation. But note that table doesn't give any indication of confidence intervals for the relevant column.
Have a look at Table 11 (below). We see the same numbers (up to rounding) in the final row, with 90% confidence intervals. Note that for rape and assault, zero sits close to the center of the confidence intervals, so the negative point estimates are not distinguishable from zero. Basically, there was a natural experiment in which California reduced the prisoner population, and for those two categories, relative to (most of) the rest of the country, crime happened to decrease very slightly, but only to an extent well within the statistical noise. (In fact, of the seven categories, only motor vehicle theft is significant -- and barely so -- at the p=0.1 level, and none is significant at the conventional p=0.05 level.)
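This kind of significance check can be reproduced mechanically from any point estimate and its 90% confidence interval. The numbers below are hypothetical stand-ins, not the actual Table 11 values.

```python
from math import sqrt, erf

def z_and_p(point, lo90, hi90):
    """Recover the z-score and two-sided p-value from a 90% CI.

    A 90% CI spans point +/- 1.645 * SE, so SE = (hi - lo) / (2 * 1.645).
    """
    se = (hi90 - lo90) / (2 * 1.645)
    z = point / se
    # Two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical: point estimate -0.5 with 90% CI (-4.5, 3.5). Zero sits well
# inside the interval, so the estimate is indistinguishable from zero.
z, p = z_and_p(-0.5, -4.5, 3.5)
print(round(z, 2), round(p, 2))  # small |z|, p far above any usual threshold
```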
Note the crime numbers used here are inferred from officially reported crimes, rescaled using national estimates of reporting rates (which helps put e.g. murder and larceny on the same footing, despite their very different reporting rates). Since only a tiny fraction of crimes in prison are reported (at least that's my sense), crimes in prisons and jails are essentially being ignored (as Roodman states in his definition of incapacitation, which I initially cited).
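The rescaling itself is simple; here is a sketch with entirely hypothetical counts and reporting rates (not the actual national estimates): estimated total crime is reported crime divided by the estimated reporting rate, category by category.

```python
# Hypothetical annual counts and reporting rates for two crime categories:
reported = {"murder": 500, "larceny": 120_000}
reporting_rate = {"murder": 0.95, "larceny": 0.30}

# Inferred totals: reported counts scaled up by the reporting rate.
estimated_total = {k: reported[k] / reporting_rate[k] for k in reported}

# Crimes inside prisons barely appear in `reported` in the first place, so no
# choice of reporting rate for these outside-prison categories recovers them.
print(estimated_total)
```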
The bottom line as I see it: if someone wants to do a cost-effectiveness analysis (CEA) of an intervention in this space, they should think carefully about the incapacitation term, as the sources (Roodman, and I suspect the underlying literature) will tend to exclude crimes in prisons and jails. In fairness, it's likely hard to get good estimates, but things don't fail to be real because they are hard to estimate.