“Stripped of all AI-centred argumentation, the reply is left mostly empty.”

The bulk of our funding has gone toward AI-focused forecasting projects (e.g., LEAP, AI-biorisk, economic effects of AI) or ‘automating forecasting research’-type work whose ultimate goal is assisting decision-makers (e.g., ForecastBench), so I think this is most of what FRI should be evaluated on.

“...meaning a much higher hurdle rate would have to be cleared to justify its activities.”

I’m not sure what comparison class people had in mind previously, but I agree it seems broadly correct to consider this work alongside other AI-related funding opportunities. As noted above, I’d argue that it is appropriate and valuable to have “AI measurement” as an important funding domain alongside areas like “AI governance,” “technical AI safety research,” “AI field-building,” etc. It seems valuable for one part of the AI grantmaking portfolio to generate evidence that can be used to sharpen views on AI timelines, to assess risk in various domains (bio, cyber, catastrophic risk), to assess the magnitude of benefits (for calibrating cost-benefit analyses on policies), and to predict the likelihood and impact of various policies (e.g., the effectiveness of DNA synthesis screening for biorisk). This type of fundamental research can inform and support more effective action in the other domains.

I also think forecasting research can directly affect AI governance via decision-making partnerships like those I described above: i.e., directly partnering with and advising important government agencies and frontier AI companies, among others, on high-stakes decisions related to AI regulation, implementing effective safeguards to reduce AI-cyber risk, and more. We have already seen some early impacts along these lines, as previously mentioned.

“Merely stating that forecasting has informed some policy or that career decisions have been influenced is not sufficient. Similarly, whether its impact is positive or negative is taken at face value and never substantiated.”

I agree. Due to confidentiality considerations, we have primarily shared details of our impact case studies with our funders and had them assess the value of the impact we are making; establishing evidence of impact publicly is more challenging. But elsewhere in the thread people have mentioned citations as one reasonable metric of impact for research organizations whose impacts are more diffuse. We have targets for growing our prominent citations over time to assess our impact, and I’ve shared examples of prominent citations of FRI research in my comment above. I also hope that over time we can share more case studies publicly and provide more of the reasoning for why we believe we had an impact and whether it was positive. The benchmarks RFP case study described above is one example that can be discussed relatively publicly.

“All this isn't to say that judgmental forecasting research or its funding should be dispensed with. In fact, hybrids that combine quantitative predictive models with expert judgment are among the foundational tools of large organisations' decision-making processes. However, I believe the field's association with online betting (high time we called things for what they are) as well as over-reliance on AI for its services is actually hurting it.”

I broadly agree on these points. We are running longitudinal expert panels, partnering with important institutions to improve their decision-making, and automating forecasting research, so I see our work as distinct from online betting/forecasting platforms.

Sharing what context I’m able to: Our work in this space so far has mostly been around assessing both risks and effective safeguards for AI-biorisk and AI-cybersecurity risk.

Superforecasters and domain experts tend to be relatively aligned on these topics so far (e.g., see this study as one example, and a later update here). (We’ve completed private research on AI-cybersecurity risk and will be publishing some of it soon.)


[Relevant context/COI: I'm CEO at the Forecasting Research Institute (FRI), an organization which I co-founded with Phil Tetlock and others. Much of the below is my personal perspective, though it is informed by my work. I don't speak for others on my team. I’m sharing an initial reply now, and our team at FRI will share a larger post in future that offers a more comprehensive reflection on these topics.]

Thanks for the post — I think it's important to critically question the value of funds going to forecasting, and this post offers a good opportunity for reflection and discussion.

In brief, I share many of your concerns about forecasting and related research, but I'm also more positive on both its impact so far and its future expected impact.

A summary of some key points:

  1. Much of the impact of forecasting research on specific decision-makers is not public. For example, FRI has informed decisions on frontier AI companies' capability scaling policies, has advised senior US national security decision-makers, and has informed research at key US and UK government agencies, but we are not able to share many details of this work publicly. However, there is also public evidence that forecasting research is widely cited and informs discourse and some decision-making (some examples below).
  2. AI timelines, adoption, and risk forecasts play a huge role in both individual career decisions and the broader AI discourse. Forecasting research still seems like one of the best tools available for getting specific and accountable beliefs on these topics. For example, comparing ‘AI safety’ community forecasts to more ‘typical’ experts’ forecasts seems especially important for understanding how much to trust each group’s views. These comparisons will become increasingly relevant for government policymakers over time, especially if there is extremely rapid AI capabilities progress that leads to major societal impacts in the short run.
  3. When evaluating the impact of FRI-style forecasting research, I think the closest relevant comparison classes are more like broad public goods/measurement-oriented research (e.g., Our World in Data, Epoch) or think-tank research (e.g. GovAI, IAPS). By its nature, the impact of this kind of research tends to be more diffuse and difficult to measure. However, I'd be interested in more intensive comparative evaluation of this type of research and agree that funders should be responsive to evidence about relative impact in these fields.
  4. Forecasting research still has a ton of flaws, and its impact has been far from the dream I've long had for it. There are still big challenges around identifying accurate forecasters on questions related to AI, integrating conditional policy forecasts with actual decision-makers’ needs, and combining deep, individual qualitative research with high-quality, group-generated quantitative forecasts. 
    1. My extremely simplified narrative is: Tetlock et al. established the modern judgmental forecasting field and created a proof of concept for better forecasts on important topics (“superforecasting”), though this work was largely academic. Some forecasting platforms were created to build on that work and apply it to a range of important issues. Targeted efforts to make forecasting more directly useful to decision-makers are relatively nascent (i.e., have largely begun in the last few years) and are accumulating impact over time, but still have room for improvement.
    2. FRI’s research, in particular, aims to close many of the gaps left by prediction markets and historical forecasting approaches: it focuses on conditional policy forecasts; on medium-to-long-run forecasts that do not get much detailed engagement on prediction markets/platforms; and on systematically eliciting forecasts from experts who would not typically participate in forecasting platforms but whom decision-makers want to rely on (while also eliciting forecasts from generalists with strong forecasting track records).
  5. However, some factors make the future potential impact of this work look more promising:
    1. AI-enhanced forecasting research is a huge factor that will unlock cheaper, faster, high-quality forecasts on any question of one's choosing. 
    2. The next few years of forecasting AI progress/adoption/impact seem critical, and like they'll deliver a lot of answers on whose forecasts we should trust. It seems good to be ready to support decision-makers during this time.
    3. Leaders in the AI space seem particularly interested in using forecasting in their decision-making; they tend to be both quantitative and open-minded. This creates more potential for forecasting to be useful. Less significantly, prediction markets and forecasting are generally becoming more credible within governments.

More detail on some select points below. This comment already got very long (!), so I’ll reserve more elaboration for a future, more comprehensive post.

Examples of impact

Forecasting research has informed some very important decisions. Unfortunately, many of the details of the relevant evidence cannot be made public. However, there is evidence of substantial public citation of this research, and some public evidence of its effect on particular decisions.

A few examples of relevant impact include:

  • Forecasting has been particularly relevant for decision-making around capability scaling policies. The near-term magnitude of AI-biorisk, how growing AI capabilities may increase it, and what safeguards need to be in place in response are all highly uncertain. Frontier AI companies, the EU (via the AI Code of Practice), and other governments are trying to track and respond to AI impacts on biorisk, cybersecurity, AI R&D, and other domains. We’ve had substantial engagement with the relevant actors, including some focused partnerships, and believe our work in this area has affected important decisions, though we unfortunately cannot share many of the details publicly.
  • Our work on ForecastBench, a benchmark of AI forecasting ability, showed that AI-produced forecasts could catch up to top human forecasters within roughly a year if trends persist. This generated interest among senior decision-makers in U.S. national security. We cannot share details, but this is another example of important decision-makers paying attention to and using forecasts.
  • We have completed commissioned research to directly inform grantmaking at Coefficient Giving (formerly Open Philanthropy), and have also indirectly affected grantmaking. As an example of the latter, our work on the Existential Risk Persuasion Tournament (XPT) partially inspired Coefficient Giving to launch an RFP on improved AI benchmarks. The XPT forecasts predicted that most existing benchmarks would likely saturate in the next few years, and showed that progress on these benchmarks was not crux-y for disagreements about AI impact. We were told that this played a role in the conception and launch of the RFP, and the XPT is cited in the public write-up.

Some examples of more diffuse impacts (e.g., on public understanding of AI, and on research used by policymakers or philanthropists) follow in the sections below.

For context: FRI has been operating for a little over 3 years, and we're accumulating substantially more momentum in terms of connections to top decision-makers as time goes on.

(To be clear: I am mostly discussing FRI here since it’s what I’m most familiar with.)

AI timelines, impact, and adoption forecasts drive a huge amount of career decision-making, attention, etc. 

Forecasts about AI timelines and risk have had major effects on people’s career decisions and the broader AI discourse. AI 2027 underlies popular YouTube videos, 80,000 Hours advises people on career decisions based on timelines forecasts, Dario Amodei’s “country of geniuses in a datacenter by 2027” forecast informs a lot of Anthropic’s work and policy outreach, the AI Impacts survey on AI researchers’ forecasts of existential risk is highly cited, etc.

A major reason I got into this field is that many people are making very intense claims about the effect that AI will have on the world soon, and I want to bring as much rigor and reflection as possible to those claims. So far, it looks like most forecasters are substantially underestimating AI capabilities progress (with some exceptions, e.g. on uplift studies). The evidence on forecasts about AI adoption, societal impacts, and risk is less clear, but I expect we will have more evidence soon, particularly from the Longitudinal Expert AI Panel (LEAP), especially given that some forecasters are predicting transformative change in the next few years.

As the expected impact and timing of AI progress are sharpened and clarified, talent and money can be allocated more efficiently.

Case study: Economic impacts of AI

In some cases, it looks to me like forecasting research is picking relatively low-hanging fruit.

The economic impact of AI is a prominent topic of public discussion right now, and it is likely that governments will spend many billions of dollars to address it in the coming years.

Currently, economists hold major sway in public policy about the economic impacts of AI. Perhaps you think top economists, as a group, are badly mistaken about the likely near-term impacts of AI, as some Epoch researchers and others believe. Perhaps you think they are likely to be fairly accurate, as Tyler Cowen, Séb Krier, or typical economists believe. It seems like a valuable common-sense intervention to at least document what various groups believe, so that when we make economic policy going forward we can rely on that evidence to determine who is trustworthy. I believe that studies like this one (and its follow-ups) will provide the clearest evidence on the topic.

Relevant comparison class for forecasting research

When thinking about the impact and cost-effectiveness of forecasting, I think it’s more appropriate to compare this work to public goods-oriented research organizations (e.g., Our World in Data, Epoch, etc.) and policy-oriented think-tank research (e.g. GovAI, IAPS, CSET, etc.).

I’ve been disappointed by most impact evaluation of think tanks and public goods-oriented research that I’ve seen. I believe this is partly because this type of work has diffuse benefits that are very difficult to quantify. But I still think it’s possible to do better, and I would like FRI to do better on this front going forward.

That said, I still believe there are reasonable heuristics for why this research area could be highly cost-effective. There are many billions of dollars of philanthropic and government capital being spent on AI policy topics. If there is a meaningful indication that forecasting is changing people’s views on these questions (as I believe there is; see discussion above), it seems reasonable to me to spend a very small fraction of that capital on getting more epistemic clarity.

My critiques of forecasting research

Forecasting research, and FRI’s research in particular, still has major areas for improvement.

Examples of a few key issues:

  • I've been underwhelmed by the accuracy of typical experts and superforecasters on questions about AI capabilities progress (as measured by benchmarks); they often underestimate AI progress (with exceptions). I think this underestimation is a useful fact to document, but it would be much more helpful if our research identified experts you should trust. We're in the process of identifying ‘top AI forecasters’ through LEAP and aim to share updates on this soon. (A sketch of how forecast accuracy is typically scored appears after this list.)
  • I think forecasting research is at its best when combined with in-depth research reports that provide more of the narratives and key arguments underlying forecasts. For example, Luca Righetti’s work on estimating (certain kinds of) AI-biorisk provides a lot of valuable analysis that usefully complements our expert panel study on the topic. [Note: Luca is an FRI senior advisor and a co-author of our forecasting study.] For decision-makers to build sufficiently detailed models, and for forecasters to test their arguments, we’d ideally have detailed research like Luca’s on most major topics where we collect forecasts, preferably from a few experts who disagree with each other. Unfortunately, this research often doesn’t readily exist, but we are investigating ways to generate it.
  • I have been somewhat surprised by how few experts in AI industry, AI policy, and other domains predict transformative impacts of AI similar to those commonly discussed by AI lab leaders, people in the AI safety community, and others. This has made it harder to have a true horse race between the ‘transformative AI’ school of thought, which seems to drive a lot of discourse and decision-making, and more gradual views of AI impacts. Though we have some transformative-AI forecasters in our studies, in future work we aim to explicitly collect more forecasts from the ‘transformative AI’ school of thought in order to set up clearer comparisons between worldviews and to better anticipate what will happen if that school makes more accurate forecasts.
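
To make the accuracy point in the first bullet concrete, here is a minimal sketch of the kind of scoring involved, using the standard Brier score with made-up forecasts and outcomes (illustrative only; none of these numbers are FRI data):

```python
# Brier score: mean squared error between probabilistic forecasts and
# resolved 0/1 outcomes (lower is better). All numbers are invented.

def brier(forecasts, outcomes):
    """Mean squared error between forecasts (probabilities) and outcomes (0/1)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical benchmark questions that mostly resolved "yes" (1), i.e.,
# AI capabilities progressed faster than a systematic underestimator expected.
outcomes = [1, 1, 1, 0, 1]
underestimator = [0.2, 0.3, 0.4, 0.3, 0.2]   # probabilities set too low
well_calibrated = [0.8, 0.7, 0.9, 0.2, 0.7]

print(f"underestimator:  {brier(underestimator, outcomes):.3f}")   # 0.444
print(f"well-calibrated: {brier(well_calibrated, outcomes):.3f}")  # 0.054
```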

I will save other thoughts on how forecasting, and FRI’s research, could be made more useful to decision-makers for a future post.

But, to be clear: I have a lot of genuine uncertainty about whether forecasting research will be sufficiently impactful going forward. There are promising signs, and increasing momentum, but to more fully deliver on its promise, more improvements will be necessary.

Some notes on FRI-style forecasting research vs. other forecasting interventions

On the value of FRI-style forecasting research in particular:

  • Prediction markets do not have good ways to collect causal policy forecasts, but in our experience, conditional policy forecasts (e.g., how much would various safeguards reduce AI-cyber risk) are often the most helpful forecasts for decision-makers. 
  • Similarly, prediction markets do not create good incentives for longer-run forecasts or low-probability forecasts, and they disincentivize sharing the rationales behind forecasts. Directly paying and incentivizing relevant experts and forecasters to answer questions is often more useful.
  • Typical forecasting platforms do not get forecasts from the kinds of experts that policymakers typically rely on, and aren't the kind of evidence that can easily be cited in government reports. (This may be unfortunate, but it is the current state of the world.)

Reasons for optimism about future impact

Finally, there are a few factors that have the potential to dramatically change the field going forward:

  1. It looks like AI may soon make it >100x cheaper and faster to get high-quality forecasts on any topic of one’s choosing. Policy researchers will be able to ask the precise question they’re interested in, will be able to upload confidential documents to inform forecasts (something we’ve heard is especially important to decision-makers), and will be able to get detailed explanations for all forecasts. AI-produced forecasts will also be much easier to test for accuracy due to the volume of forecasts they can provide, and it will be easier to generate ‘crux’ questions since AI will not get bored of producing huge numbers of conditional forecasts (which are necessary for identifying cruxes). Building benchmarks and tooling to harness AI-produced forecasts will be a much larger part of our work going forward.
  2. The next few years seem very unusual in human history: very thoughtful researchers are predicting “Superhuman Coders” by 2029, with attendant large impacts. There is a spectrum of views, but the scope for disagreement among reasonable people about what the world will look like in 2030 is huge. This is a particularly important time to make predictions testable, update on what we observe, and make better policy and personal decisions on the basis of this information.
  3. People working in the AI space seem particularly interested in using forecasting, perhaps due to a mix of being quantitatively oriented and facing unusual degrees of uncertainty. This bodes well for forecasting being useful in the coming years. Less significantly, there appears to be a broader cultural change around forecasting-related topics: prediction markets are increasingly cited by government officials, and the public is paying more attention to them than ever before. Much of the impact of prediction markets specifically seems negative (e.g., via incentivizing gambling on low-value topics), but the broader cultural shift suggests there may be an opportunity for better uses of forecasting to enter public consciousness as well.

Thanks, Ryan, this is great. These are the kinds of details we are hoping for in order to inform future operationalizations of “AI takeover” and “existential catastrophe” questions. 

For context: We initially wanted to keep our definition of “existential catastrophe” closer to Ord’s definition, but after a few interviews with experts and some back-and-forth we struggled to get satisfying resolution criteria for the “unrecoverable dystopia” and (especially) “destruction of humanity’s longterm potential” aspects of the definition. Our ‘concerned’ advisors thought the “extinction” and “unrecoverable collapse” parts would cover enough of the relevant issues and, as we saw in the forecasts we’ve been discussing, these parts seem to have captured a lot of the risk for the ‘concerned’ participants in this sample. But we’d like to figure out better operationalizations of “AI takeover” or related “existential catastrophes” for future projects, and this is helpful on that front.

Broadly, it seems like the key aspect to carefully operationalize here is “AI control of resources and power.” Your suggestion here seems like it’s going in a helpful direction: 

“One way to operationalize this is that if the AIs in control wanted to kill or torture humans, they could easily do so.”

We’ll keep reflecting on this, and may reach out to you when we write “takeover”-related questions for our future projects and get into the more detailed resolution criteria phase.

Thanks for taking the time to offer your detailed thoughts on the outcomes you’d most like to see forecasted.

Hi Ryan,

Thanks for the comment! 

Regarding “extinction”

Just to be clear, the primary outcome we looked at (after considering various definitions and getting agreement from some key ‘concerned’ people) was “existential catastrophe,” defined as either extinction or “unrecoverable collapse,” with the latter defined as “(a) a global GDP of less than $1 trillion annually in 2022 dollars for at least a million years (continuously), beginning before 2100; or (b) a human population remaining below 1 million for at least a million years (continuously), beginning before 2100.”

However, we also sanity checked (see p. 14) our findings by asking about the probability that more than 60% of humans would die within a 5-year period before 2100. The median concerned participant forecasted 32%, and the median skeptic forecasted 1%. So skeptics considered this outcome much more likely than existential catastrophe (median of 1% vs. 0.12%), but a very large gap between the groups remained. It also did not seem that focusing on this alternative outcome made a major difference to crux rankings when we collected a small amount of data on it. So, for the most part we focus on the “existential catastrophe” outcome and expect that most of the key points in the debate would still hold for somewhat less extreme outcomes (with the exception of the debate about how difficult it is to kill literally everyone, though that point remains relevant for those who argue for high probabilities of literal extinction).

We also had a section of the report ("Survey on long-term AI outcomes") where we asked both groups to consider other severe negative outcomes such as major decreases in human well-being (median <4/10 on an "Average Life Evaluation" scale) and 50% population declines. 

Do you have alternative “extremely bad” outcomes that you wish had been considered more?

Regarding “displacement” (footnote 10 on p. 6 for full definition):

We added this question in part because some participants and early readers wanted to explore debates about “AI takeover,” since some say that is the key negative outcome they are worried about rather than large-scale death or civilizational collapse. However, we found this difficult to operationalize and agree that our question is highly imperfect; we welcome better proposals. In particular, as you note, our operationalization allows for positive ‘displacement’ outcomes where humans choose to defer to AI advisors and is ambiguous in the ‘AI merges with humans’ case.

Your articulations of extremely advanced AI capabilities and energy use seem useful to ask about also, but do not directly get at the “takeover” question as we understood it.

Nevertheless, our existing ‘displacement’ question at least points to some major difference in world models between the groups, which is interesting even if the net welfare effect of the outcome is difficult to pin down. A median year for ‘displacement’ (as currently defined) of 2045 for the concerned group vs. 2450 for the skeptics is a big gap that illustrates major differences in how the groups expect the future to play out. This helped to inspire the elaboration on skeptics’ views on AI risk in the “What long-term outcomes from AI do skeptics expect?” section.

Finally, I want to acknowledge that one of the top questions we wish we had asked relates to superintelligent-like AI capabilities. We hope to dig more into this in follow-up studies and will consider the definitions you offered.

Thanks again for taking the time to consider this and propose operationalizations that would be useful to you!

(Below written by Peter in collaboration with Josh.)

It sounds like I have a somewhat different view of Knightian uncertainty, which is fine—I’m not sure that it substantially affects what we’re trying to accomplish. I’ll simply say that, to the extent that Knight saw uncertainty as signifying the absence of “statistics of past experience,” nuclear war strikes me as pretty close to a definitional example. I think we make the forecasting challenge easier by breaking the problem into pieces, moving us closer to risk. That’s one reason I wanted to add conventional conflict between NATO and Russia as an explicit condition: NATO has a long history of confronting Russia and has, by and large, managed to avoid direct combat.

By contrast, the extremely limited history of nuclear war does not enable us to validate any particular model of the risk. I fear that the assumptions behind the models you cite may not work out well in practice and would like to see how they perform in a variety of as-similar-as-possible real world forecasts. That said, I am open to these being useful ways to model the risk. Are you aware of attempts to validate these types of methods as applied to forecasting rare events?

On the ignorance prior: 

I agree that not all complex, debatable issues imply probabilities close to 50-50. However, your forecast will be sensitive to how you define the universe of "possible outcomes" that you see as roughly equally likely from an ignorance prior. Why not define the possible outcomes as: one-off accident, containment on one battlefield in Ukraine, containment in one region in Ukraine, containment in Ukraine, containment in Ukraine and immediately surrounding countries, etc.? Defining the ignorance prior universe in this way could stack the deck in favor of containment and lead to a very low probability of large-scale nuclear war. How can we adjudicate what a naive, unbiased description of the universe of outcomes would be?
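
To make the sensitivity concrete, here is a minimal sketch using the candidate outcomes listed above: the same uniform “ignorance” prior assigns very different probabilities to large-scale nuclear war depending on how finely the containment outcomes are partitioned.

```python
# How the choice of outcome partition moves a uniform "ignorance" prior.
# Coarse two-way partition vs. the finer partition sketched above.

coarse = ["large-scale nuclear war", "no large-scale nuclear war"]
fine = [
    "large-scale nuclear war",
    "one-off accident",
    "containment on one battlefield in Ukraine",
    "containment in one region in Ukraine",
    "containment in Ukraine",
    "containment in Ukraine and immediately surrounding countries",
]

for partition in (coarse, fine):
    p = 1 / len(partition)  # uniform prior over the stated outcomes
    print(f"{len(partition)} outcomes -> P(large-scale war) = {p:.2f}")

# 2 outcomes -> P(large-scale war) = 0.50
# 6 outcomes -> P(large-scale war) = 0.17
```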

As I noted, my view of the landscape is different: it seems to me that there is a strong chance of uncontrollable escalation if there is direct nuclear war between Russia and NATO. I agree that neither side wants to fight a nuclear war (if they did, we’d have had one already!), but neither side wants its weapons destroyed on the ground either. That creates a strong incentive to launch first, especially if one believes the other side is preparing to attack. In fact, even absent that condition, launching first is rational if you believe it is possible to “win” a nuclear war, in which case you want to pursue a damage-limitation strategy. If you believe there is a meaningful difference between 50 million dead and 100 million dead, then it makes sense to reduce casualties by (a) taking out as many of the enemy’s weapons as possible; (b) employing missile defenses to reduce the impact of whatever retaliatory strike the enemy manages; and (c) building up civil defenses (fallout shelters, etc.) such that more people survive whatever warheads get through despite (a) and (b).

In a sense, “the logic of nuclear war” is oxymoronic because a prisoner’s dilemma-type dynamic governs the situation: even though cooperation (no war) is the best outcome, both sides are driven to defect (war). By taking actions that seem to be in our self-interest, we ensure what we might euphemistically call a suboptimal outcome.

When I talk about “strategic stability,” I am referring to a dynamic where the incentives to launch first or to launch on warning have been reduced, such that choosing cooperation makes more sense. New START (and START before it) attempts to boost strategic stability by establishing nuclear parity (at least with respect to strategic weapons). But its influence has been undercut by other developments that are destabilizing.
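
As a stylized illustration of the launch-first incentive described above: with made-up payoffs (only their ordering matters, and none of this is a claim about real probabilities or casualties), striking first becomes the expected-value-maximizing choice once you believe the other side is sufficiently likely to launch.

```python
# Stylized sketch of the launch-first incentive. Payoff numbers are
# invented for illustration; only their ordering matters.

payoff = {  # your payoff given (your action, their action)
    ("wait",   "wait"):   0,     # no war: best outcome for both
    ("wait",   "launch"): -100,  # absorb their first strike: worst case
    ("launch", "wait"):   -40,   # strike first and "limit damage"
    ("launch", "launch"): -60,   # both launch
}

def expected_payoff(action, p_they_launch):
    """Expected payoff of an action given your belief that they will launch."""
    return (payoff[(action, "launch")] * p_they_launch
            + payoff[(action, "wait")] * (1 - p_they_launch))

for p in (0.2, 0.6, 0.9):
    ev_wait = expected_payoff("wait", p)
    ev_launch = expected_payoff("launch", p)
    best = "launch" if ev_launch > ev_wait else "wait"
    print(f"P(they launch)={p}: wait={ev_wait:.0f}, launch={ev_launch:.0f} -> {best}")

# With these numbers, launching first becomes the "rational" choice once
# you believe the other side is more likely than not to launch.
```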

Thank you again for the thoughtful comments, and I’m happy to engage further if that would be clarifying or helpful to future forecasting efforts.

Thanks for the reply and the thoughtful analysis, Misha and Nuño, and please accept our apologies for the delayed response. The below was written by Peter in collaboration with Josh.

First, regarding the Rodriguez estimate: I take your point about using the geometric mean rather than the arithmetic mean, and that would move my estimate of the risk of nuclear war down a bit — thanks for pointing that out. To be honest, I had not dug into the details of the Rodriguez estimate and was attempting to remove your downward adjustment for "new de-escalation methods," since I was not convinced by that point. To give a better independent estimate I'd need to dig into the original analysis and do some further thinking of my own. I'm curious: how much of an adjustment were you making based on the "new de-escalation methods" point?
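
For readers following the aggregation point, here is a minimal sketch, with hypothetical estimates rather than Rodriguez's actual inputs, of why aggregating risk estimates with a geometric mean yields a lower figure than an arithmetic mean:

```python
import math

# Hypothetical annual-risk estimates (not Rodriguez's actual inputs).
estimates = [0.001, 0.005, 0.02, 0.1]

arith = sum(estimates) / len(estimates)
geom = math.prod(estimates) ** (1 / len(estimates))

print(f"arithmetic mean: {arith:.4f}")  # 0.0315 (dominated by the largest estimate)
print(f"geometric mean:  {geom:.4f}")   # 0.0100
```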

Regarding some of the other points:

  • On "informed and unbiased actors": I agree that if someone were following Rob Wiblin's triggers, they'd have a much higher probability of escape. However, I find the construction of the precise forecasting question somewhat confusing and, from context, had been interpreting it to mean that you were considering the probability that informed and unbiased actors would be able to escape after Russia/NATO nuclear warfare had begun but before London had been hit, which made me pessimistic because that seems like a fairly late trigger for escape. However, it seems that this was not your intention. If you're assuming something closer to Wiblin's triggers before Russia/NATO nuclear warfare begins, I'd expect greater chance of escape like you do. I would still have questions about how able/willing such people would be to potentially stay out of London for months at a time (as may be implied by some of Wiblin's triggers) and what fraction of readers would truly follow that protocol, though. As you say, perhaps it makes most sense for people to judge this for themselves, but describing the expected behavior in more detail may help craft a better forecasting question.
  • On reasons for optimism from “post-Soviet developments”: I am curious what, besides the New START extension, you may be thinking of getting others’ views on. From my perspective, the New START extension was the bare minimum needed to maintain strategic predictability/transparency. It is important, but (and I say this as someone who worked closely on Senate approval of the treaty) it did not fundamentally change the nuclear balance or dramatically improve stability beyond the original START. Yes, it cut the number of deployed strategic warheads, which is significant, but 1,550 on each side is still plenty to end civilization as we know it (even if employed against only counterforce targets). The key benefit of New START was that it updated the verification provisions of the original START treaty, which was signed before the dissolution of the Soviet Union, so I question whether it should be considered a “post-Soviet development” for the purposes of adjusting forecasts relative to that era. START (and its verification provisions) had been allowed to lapse in December 2009, so the ratification of New START was crucial, but the value of its extension needs to be weighed against the host of negative developments that I briefly alluded to in my response.

Peter says: No, I live in Washington, DC, a few blocks from the White House, and I’m not suggesting evacuation at the moment because I think conventional conflict would precede nuclear conflict. But if we start trading bullets with Russian forces, the odds of nuclear weapons use go up sharply. And yes, I do believe the risk is higher in Europe than in the United States. But for the moment, I’d happily attend a conference in London.