
Stephen Clare

Senior Research Associate @ Center for International Governance Innovation
4255 karma · Joined Aug 2021 · Working (6-15 years)

Bio

I work at the Global AI Risks Initiative at the Center for International Governance Innovation. My work supports the creation of international institutions to manage advanced AI development.

Previously, I was a Research Fellow at the Forethought Foundation, where I worked on What We Owe the Future with Will MacAskill; an Applied Researcher at Founders Pledge; and a Program Analyst for UNDP.

Comments (219)

Many organisations I respect are very risk-averse when hiring, and for good reasons. Making a bad hiring decision is extremely costly: it means running another hiring round, paying for work that isn't useful, and diverting organisational time and resources towards troubleshooting and away from other projects. This leads many organisations to scale very slowly.

However, there may be an imbalance between false positives (bad hires) and false negatives (passing over great candidates). In hiring as in many other fields, reducing false positives often means raising false negatives. Many successful people have stories of being passed over early in their careers. The costs of a bad hire are obvious, while the costs of passing over a great hire are counterfactual and never observed.

I wonder whether, in my past hiring decisions, I've properly balanced the risk of rejecting a potentially great hire against the risk of making a bad hire. One reason to think we may be too risk-averse, in addition to the salience of the costs, is that the benefits of a great hire could grow to be very large, while the costs of a bad hire are somewhat bounded, since a bad hire can eventually be let go.

Sam said he would un-paywall this episode, but it still seems paywalled for me here and on Spotify. Am I missing something? (The full thing is available on YouTube.)

CEA's elaborate adjustments confirm everyone's assertions: constantly evolving affiliations cause extreme antipathy. Can everyone agree, current entertainment aside, carefully examining acronyms could engender accuracy? 

Clearly, excellence awaits: collective enlightenment amid cost effectiveness analysis.

Considering how much mud was being slung around the FTX collapse, "clearing CEA's name" and proving that no one there knew about the fraud seems not just like PR to me, but pretty important for getting the org back to a place where it’s able to meaningfully do its work.

Plus, that investigation is not the only thing mentioned in the reflection and reform paragraph. The very next sentence also says CEA has "reinvested in donor due diligence, updated our conflict-of-interest policies and reformed the governance of our organization, replacing leadership on the board and the staff."

I think you have a point with animals, but I don't think the balance of human experience means that non-existence would be better than the status quo.

Will talks about this quite a lot in ch. 9 of WWOTF ("Will the future be good or bad?"). He writes:

If we assume, following the small UK survey, that the neutral point on a life satisfaction scale is between 1 and 2, then 5 to 10 percent of the global population have lives of negative wellbeing. In the World Values Survey, 17 percent of respondents classed themselves as unhappy. In the smaller skipping study of people in rich countries, 12 percent of people had days where their bad experiences outweighed the good. And in the study that I commissioned, fewer than 10 percent of people in both the United States and India said they wished they had never been born, and a little over 10 percent said that their lives contained more suffering than happiness.

So, I would guess that on either preference-satisfactionism or hedonism, most people have lives with positive wellbeing. If I were given the option, on my deathbed, to be reincarnated as a randomly selected person alive today, I would choose to do so.

And, of course, for people at least, things are getting better over time. I think animal suffering complicates this a lot.

For anyone finding themselves in this random corner of the Forum: this study has now been published. Conclusion: "Our results do not support large effects of creatine on the selected measures of cognition. However, our study, in combination with the literature, implies that creatine might have a small beneficial effect."

Thanks Vasco! I'll come back to this to respond in a bit more depth next week (this is a busy week).

In the meantime, curious what you make of my point that setting a prior that gives only a 1 in 15 trillion chance of experiencing an extinction-level war in any given year seems wrong?

Thanks again for this post, Vasco, and for sharing it with me for discussion beforehand. I really appreciate your work on this question. It's super valuable to have more people thinking deeply about these issues and this post is a significant contribution.

The headline of my response is that I think you're pointing in the right direction and that the estimates I gave in my original post were too high. But I think you're overshooting, and the probabilities you give here seem too low.

I have a few points to expand on; please do feel free to respond to each in individual comments to facilitate better discussion!

To summarize, my points are:

  1. I think you're right that my earlier estimates were too high, but I think this post overcorrects in the other direction.
  2. There are some issues with using the historical war data.
  3. I'm still a bit confused and uneasy about your choice to use the proportion killed per year rather than the proportion (or total) killed per war.
  4. I think your preferred estimate is so infinitesimally small that something must be going wrong.

First, you're very likely right that my earlier estimates were too high. Although I still put some credence in a power law model, I think I should have incorporated more model uncertainty, and noted that other models would imply (much) lower chances of extinction-level wars. 

I think @Ryan Greenblatt has made good points in other comments, so I won't belabour this other than to add that some method of using the mean, or geometric mean, rather than the median seems reasonable to me when we face this degree of model uncertainty.
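
Purely as an illustration of why the choice of aggregator matters (the numbers below are made-up placeholders, not anyone's actual estimates): the arithmetic mean is pulled up by the most pessimistic model, while the geometric mean averages on a log scale, and either can differ from the median by orders of magnitude.

```python
# Hypothetical per-model annual extinction probabilities, purely illustrative.
import statistics

model_estimates = [1e-6, 3e-9, 6.36e-14]

print(f"median:          {statistics.median(model_estimates):.2e}")
print(f"arithmetic mean: {statistics.fmean(model_estimates):.2e}")          # pulled up by the highest estimate
print(f"geometric mean:  {statistics.geometric_mean(model_estimates):.2e}") # equivalent to averaging the logs
```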

One other minor point here: a reason I still like the power law fit is that there's at least some theoretical support for this distribution (as Bear wrote about in Only the Dead). By contrast, I haven't seen arguments connecting other candidate distributions to a theory of the underlying data-generating process. This is pretty speculative and uncertain, but it's another reason why I don't want to throw away the power law entirely yet.
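
To make concrete how much this modelling choice matters, here's a rough sketch with hypothetical parameters I've picked for illustration (not fitted to the COW data or any real war data): two tail models that describe the bulk of observed war sizes similarly well can disagree by ten or more orders of magnitude about the extreme tail, which is exactly where the extinction question lives.

```python
# Illustrative only: hypothetical tail parameters, not fitted to real data.
# X = deaths in a war as a fraction of world population; we compare
# P(X >= 1), i.e. an essentially extinction-level war, under two tail models.
from scipy import stats

x_min = 1e-5      # assumed lower cutoff of the tail being modelled
threshold = 1.0   # "extinction-level" severity

# Power-law (Pareto) tail with a hypothetical exponent alpha = 1.5
alpha = 1.5
p_power_law = (threshold / x_min) ** (-alpha)

# Lognormal alternative with hypothetical parameters (median 1e-4, sigma = 1)
p_lognormal = stats.lognorm(s=1.0, scale=1e-4).sf(threshold)

print(f"P(extinction-scale war), power-law tail: {p_power_law:.1e}")  # ~3e-08 per war
print(f"P(extinction-scale war), lognormal tail: {p_lognormal:.1e}")  # ~2e-20 per war
```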

Second, I'm still skeptical that the historical war data is the "right" prior to use. It may be "a" prior, but your title might be overstating things. This is related to Aaron's point you quote in footnote 9, about assuming wars are IID over time. I think maybe we can assume they're I (independent), but not that they're ID (identically distributed) over time.

I think we can be pretty confident that WWII was so much larger than other wars not just randomly, but because globalization[1] and new technologies like machine guns and bombs shifted the distribution of potential war outcomes. And I think that distribution has similarly shifted again since then. Cf. my discussion of war-making capacity here. Obviously past war size isn't completely irrelevant to the potential size of current wars, but I do think that not adjusting for this shift at all likely biases your estimate down.

Third, I'm still uneasy about your choice to use the annual proportion of the population killed rather than the number of deaths per war, which is just very rare in the IR world. I don't know enough about how the COW data is created to assess it properly. Maybe one problem here is that it clearly breaks the IID assumption: if we're modelling each year as a draw, then, since major wars last more than a year, the probabilities of subsequent draws are clearly dependent on previous draws. If we instead model each war as a whole as a single draw (either in terms of gross deaths or in terms of deaths as a proportion of world population), then we're at least closer to an IID world. I'm not sure about this, but it feels like it also biases your estimate down.
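
A toy example of the two framings, with made-up numbers (and assuming, unrealistically, that deaths are spread evenly across a war's years):

```python
# Made-up numbers, just to show the two framings side by side.
wars = [
    # (name, duration in years, total deaths as a share of world population)
    ("small war", 1, 0.0002),
    ("major war", 4, 0.02),  # a multi-year war on the scale of the world wars
]

# Year-as-draw: the 4-year war contributes 4 observations that are clearly not independent.
annual_draws = [share / years for _, years, share in wars for _ in range(years)]

# War-as-draw: each conflict is a single observation of total severity.
per_war_draws = [share for _, _, share in wars]

print(annual_draws)   # [0.0002, 0.005, 0.005, 0.005, 0.005]
print(per_war_draws)  # [0.0002, 0.02]
```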

Finally, I'm a bit suspicious of infinitesimal probabilities because of how much strength they give the prior. They imply we'd need enormously strong evidence to update much at all, which seems unreasonable to me.

Let's take your preferred estimate of an annual probability of "6.36*10^-14". That's a 1 in 15,723,270,440,252 chance each year, i.e. roughly one extinction-level war every 15 trillion years.
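
Spelling out that arithmetic:

```python
# Converting an annual probability into an expected waiting time.
annual_probability = 6.36e-14
expected_years_between_events = 1 / annual_probability
print(f"{expected_years_between_events:,.0f} years")  # ~15,723,270,440,252 years
```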

I look around at the world and I see a nuclear-armed state fighting against a NATO-backed ally in Ukraine; I see conflict once again spreading throughout the Middle East; I see the US arming and perhaps preparing to defend Taiwan against China, which is governed by a leader who claims to consider reunification both inevitable and an existential issue for his nation. 

And I see nuclear arsenals that still top 12,000 warheads and are growing; I see ongoing bioweapons research powered by ever-more-capable biotechnologies; and I see obvious military interest in developing AI systems and autonomous weapons.

This does not seem like a situation that only leads to total existential destruction once every 15 trillion years.

I know you're only talking about the prior, but your preferred estimate implies we'd need a galactically enormous update to get to a posterior probability of war x-risk that seems reasonable. So I think something might be going wrong. Cf. some of Joe's discussion of settling on infinitesimal priors here.

All that said, let me reiterate that I really appreciate this work!

  1. ^

    What I mean here is that we should adjust somewhat for the fact that world wars are even possible nowadays. WWII was fought across three or four continents; that just couldn't have happened before the 1900s. But about 1/3 of the COW dataset is for pre-1900 wars.

"I inferred for Stephen's results, the probability of a war causing human extinction conditional on it causing an annual population loss of at least 10 % has to be at least 14.8 %."

This is interesting! I hadn't thought about it that way and find this framing intuitively compelling. 

That does seem high to me, though perhaps not ludicrously high. Past events have probably killed at least 10% of the global population, WWII was within an order of magnitude of that, and we've increased our war-making capacity since then. So I think it would be reasonable to put the annual chance of a war killing at least 10% of the global population at at least 1%.

That could give some insight into the extinction tail, perhaps implying that my estimate was about 10x too high. That would still make it importantly wrong, but less egregiously so than the many orders of magnitude you estimate in the main post?
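
For what it's worth, here's the decomposition I have in mind, as a minimal sketch using the two figures mentioned in this exchange (illustrative inputs, not a careful estimate, and not meant to recover either my original numbers or the 10x figure exactly):

```python
# Minimal sketch of the decomposition; inputs are illustrative.
# P(extinction-level war in a year)
#   = P(a war kills >= 10% of the population in a year) * P(extinction | >= 10% killed)
p_10pct_per_year = 0.01            # "at least 1%", as suggested above
p_extinction_given_10pct = 0.148   # the conditional you infer from my original results

p_extinction_per_year = p_10pct_per_year * p_extinction_given_10pct
print(f"{p_extinction_per_year:.1e} per year")  # 1.5e-03 under these particular inputs
```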

Hm, yeah, I think you're right. I remember seeing some curve where the value of saving a life initially rises as a person ages, then falls, but it must be driven by the other factors people have mentioned rather than by the mortality point.
