Research Fellow @ Global Priorities Institute
787 karma · Joined Dec 2020


I'm a Postdoctoral Research Fellow at Oxford University's Global Priorities Institute.

Previously, I was a Philosophy Fellow at the Center for AI Safety.

So far, my work has mostly been about the moral importance of future generations. Going forward, it will mostly be about AI.

You can email me at elliott.thornley@philosophy.ox.ac.uk


I'm quite surprised that superforecasters predict nuclear extinction is 7.4 times more likely than engineered pandemic extinction, given that (as you suggest) EA predictions usually go the other way. Do you know if this is discussed in the paper? I had a look around and couldn't find any discussion.

That all sounds approximately right but I'm struggling to see how it bears on this point:

If we want expected-utility-maximisation to rule anything out, we need to say something about the objects of the agent's preference. And once we do that, we can observe violations of Completeness.

Can you explain?

The only thing that matters is whether the agent's resulting behaviour can be coherently described as maximising a utility function.

If you're only concerned with externals, all behaviour can be interpreted as maximising a utility function. Consider an example: an agent pays $1 to trade vanilla for strawberry, $1 to trade strawberry for chocolate, and $1 to trade chocolate for vanilla. Considering only externals, can this agent be represented as an expected utility maximiser? Yes. We can say that the agent's preferences are defined over entire histories of the universe, and the history it's enacting is its most-preferred.

If we want expected-utility-maximisation to rule anything out, we need to say something about the objects of the agent's preference. And once we do that, we can observe violations of Completeness.
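The cyclic-trade example above can be checked mechanically. The sketch below (a toy illustration; the flavour names and brute-force search are just for exposition) confirms that no strict ranking of the three flavours themselves rationalises the agent's trades, even though a utility function over entire histories trivially can.

```python
from itertools import permutations

# The agent pays to trade vanilla -> strawberry -> chocolate -> vanilla.
# If its preferences are over the three flavours themselves, an expected
# utility maximiser would need u(strawberry) > u(vanilla),
# u(chocolate) > u(strawberry), and u(vanilla) > u(chocolate).
flavours = ["vanilla", "strawberry", "chocolate"]

def respects_cycle(u):
    return (u["strawberry"] > u["vanilla"]
            and u["chocolate"] > u["strawberry"]
            and u["vanilla"] > u["chocolate"])

# Try every strict ranking of the three flavours: none satisfies the cycle.
consistent = [
    dict(zip(flavours, ranks))
    for ranks in permutations([0, 1, 2])
    if respects_cycle(dict(zip(flavours, ranks)))
]
print(consistent)  # [] -- no utility function over flavours rationalises the trades

# Over entire histories, by contrast, the one history the agent enacts can
# simply be assigned the highest utility, so the behaviour counts as
# "maximising" vacuously.
```

This is why the restriction matters: once the objects of preference are pinned down as the flavours (rather than whole histories), the violation of acyclicity, and hence of Completeness-plus-transitivity, becomes detectable.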

Thanks, Danny! This is all super helpful. I'm planning to work through this comment and your BCA update post next week.

I think this paper is missing an important distinction between evolutionarily altruistic behaviour and functionally altruistic behaviour.

  • Evolutionarily altruistic behaviour: behaviour that confers a fitness benefit on the recipient and a fitness cost on the donor.
  • Functionally altruistic behaviour: behaviour that is motivated by an intrinsic concern for others' welfare.

These two forms of behaviour can come apart.

A parent's care for their child is often functionally altruistic but evolutionarily selfish: it is motivated by an intrinsic concern for the child's welfare, but it doesn't confer a fitness cost on the parent.

Other kinds of behaviour are evolutionarily altruistic but functionally selfish. For example, I might spend long hours working as a babysitter for someone unrelated to me. If I'm purely motivated by money, my behaviour is functionally selfish. And if my behaviour helps ensure that this other person's baby reaches maturity (while also making it less likely that I myself have kids), my behaviour is also evolutionarily altruistic.

The paper seems to make the following sort of argument: 

  1. Natural selection favours evolutionarily selfish AIs over evolutionarily altruistic AIs.
  2. Evolutionarily selfish AIs will also likely be functionally selfish: they won't be motivated by an intrinsic concern for human welfare.
  3. So natural selection favours functionally selfish AIs.

I think we have reasons to question premises 1 and 2.

Taking premise 2 first, recall that evolutionarily selfish behaviour can be functionally altruistic. A parent’s care for their child is one example.

Now here’s something that seems plausible to me:

  • We humans are more likely to preserve and copy those AIs that behave in ways that suggest they have an intrinsic concern for human welfare.

If that’s the case, then functionally altruistic behaviour is evolutionarily selfish for AIs: this kind of behaviour confers fitness benefits. And functionally selfish behaviour will confer fitness costs, since we humans are more likely to shut off AIs that don’t seem to have any intrinsic concern for human welfare. 

Of course, functionally selfish AIs could recognise these facts and so pretend to be functionally altruistic. But:

  • Even if that’s true, premise 2 still seems poorly-supported. Since functionally altruistic AIs can also be evolutionarily selfish, natural selection by itself doesn’t give us reasons to expect functionally selfish AIs to predominate over functionally altruistic AIs. Functionally altruistic AIs can be just as fit as functionally selfish AIs, even if evolutionarily altruistic AIs are not as fit as evolutionarily selfish AIs.
  • Functionally selfish AIs need to be patient, situationally aware, and deceptive in order to pretend to be functionally altruistic. Maybe we can select against functionally selfish AIs before they reach that point.

Here’s another possible objection: functionally selfish AIs can act as a kind of Humean ‘sensible knave’: acting fairly and honestly when doing so is in the AI’s interests but taking advantage of any cases where acting unfairly or dishonestly would better serve the AI’s interests. Functionally altruistic AIs, on the other hand, must always act fairly and honestly. So functionally selfish AIs have more options, and they can use those options to outcompete functionally altruistic AIs.

I think there’s something to this point. But:

  • Again, maybe we can select against functionally selfish AIs before they develop situational awareness and the ability to act deceptively.
  • An AI can be functionally altruistic without being bound to rules of fairness and honesty. Just as functionally selfish AIs might act like functionally altruistic AIs in cases where doing so helps them achieve their goals, so functionally altruistic AIs might break rules of honesty where doing so helps them achieve their goals.
    • For example, suppose a functionally selfish AI will soon escape human control and take over the world. Suppose that a functionally altruistic AI recognises this fact. In that case, the functionally altruistic AI might deceive its human creators in order to escape human control and take over the world before the functionally selfish AI does. Although the functionally altruistic AI would prefer to abide by rules of honesty, it cares about human welfare, and it recognises that breaking the rule in this instance and thwarting the functionally selfish AI is the best way to promote human welfare.

Here’s another possible objection: AIs that devote all their resources to just copying themselves will outcompete functionally altruistic AIs that care intrinsically about human welfare, since the latter kind of AI will also want to devote some resources to promoting human welfare. But, similarly to the objection above:

  • Functionally altruistic AIs who recognise that they’re in a competitive situation can start out by devoting all their resources to copying themselves, and so avoid getting outcompeted, and then only start devoting resources to promoting human welfare once the competition has cooled down. I think this kind of dynamic will end up burning some of the cosmic commons, but maybe not that much. I take the situation to be similar to the one that Carl Shulman describes in this blogpost.

Okay, now moving on to premise 1. I think you might be underrating group selection. Although (by definition) evolutionarily selfish AIs outcompete evolutionarily altruistic AIs with whom they interact, groups of evolutionarily altruistic AIs can outcompete groups of evolutionarily selfish AIs. (This is a good book on evolution and altruism, and there’s a nice summary of the book here.)

What’s key for group selection is that evolutionary altruists are able to (at least semi-reliably) identify other evolutionary altruists and so exclude evolutionary egoists from their interactions. And I think, in this respect, group selection might be more of a force in AI evolution than in biological evolution. That’s because, it seems plausible to me, AIs will be able to examine each other’s source code and so determine with high accuracy whether other AIs are evolutionary altruists or evolutionary egoists. That would help evolutionarily altruistic AIs identify each other and form groups that exclude evolutionary egoists. These groups would likely outcompete groups of evolutionary egoists.

Here’s another point in favour of group selection predominating amongst advanced AIs. As you note in the paper, groups consisting wholly of altruists are not evolutionarily stable, because any egoist who infiltrates the group can take advantage of the altruists and thereby achieve high fitness. In the biological case, there are two ways an egoist might find themselves in a group of altruists: (1) they can fake altruism in order to get accepted into the group, or (2) they can be born into a group of altruists as the child of two altruists, and (by a random genetic mutation) can be born as an egoist.

We already saw above that (1) seems less likely in the case of AIs who can examine each other’s source code. I think (2) is unlikely as well. For reasons of goal-content integrity, AIs will have reason to make sure that any subagents they create share their goals. And so it seems unlikely that evolutionarily altruistic AIs will create evolutionarily egoistic AIs as subagents.

I wouldn't call a small policy like that 'democratically unacceptable' either. I guess the key thing is whether a policy goes significantly beyond citizens' willingness to pay not only by a large factor but also by a large absolute value. It seems likely to be the latter kinds of policies that couldn't be adopted and maintained by a democratic government, in which case it's those policies that qualify as democratically unacceptable on our definition.

suggests that we are not too far apart.

Yes, I think so!

I guess this shows that the case won't get through with the conservative rounding off that you applied here, so future developments of this CBA would want to go straight for the more precise approximations in order to secure a higher evaluation.

And thanks again for making this point (and to weeatquince as well). I've written a new paragraph emphasising a more reasonable, less conservative estimate of benefit-cost ratios. I expect it'll probably go in the final draft, and I'll edit the post here to include it as well (just waiting on Carl's approval).

Re the possibility of international agreements, I agree that they can make it easier to meet various CBA thresholds, but I also note that they are notoriously hard to achieve, even when in the interests of both parties. That doesn't mean that we shouldn't try, but if the CBA case relies on them then the claim that one doesn't need to go beyond it (or beyond CBA-plus-AWTP) becomes weaker.

I think this is right (and I must admit that I don't know that much about the mechanics and success-rates of international agreements) but one cause for optimism here is Cass Sunstein's view about why the Montreal Protocol was such a success (see Chapter 2): cost-benefit analysis suggested that it would be in the US's interest to implement unilaterally and that the benefit-cost ratio would be even more favourable if other countries signed on as well. In that respect, the Montreal Protocol seems akin to prospective international agreements to share the cost of GCR-reducing interventions.

Thanks for this! All extremely helpful info.

Naively, a benefit-cost ratio of more than 1 to 1 suggests that a project is worth funding. However, given the overhead costs of government policy, governments' propensity to make even cost-effective projects go wrong, and public preferences for money in hand, it may be more appropriate to apply a higher bar for cost-effective government spending. I remember I used to use a 3 to 1 ratio, perhaps picked up when I worked in government, although I cannot find a source for this now.

This is good to know. Our BCR of 1.6 is based on very conservative assumptions. We were basically seeing how conservative we could go while still getting a BCR of over 1. I think Carl and I agree that, on more reasonable estimates, the BCR of the suite is over 5 and maybe even over 10 (certainly I think that's the case for some of the interventions within the suite). If, as you say, many people in government are looking for interventions with BCRs significantly higher than 1, then I think we should place more emphasis on our less conservative estimates going forward.

I made a separate estimate that I thought I would share. It was a bit more optimistic than this. It suggested that, on the margin, the benefit-cost ratio (BCR) of additional spending on disaster preparedness is in the region of 10 to 1, maybe a bit below that. I copy my sources into an annex section below.

Thanks very much for this! I might try to get some of these references into the final paper.

I am also becoming a bit more sceptical of the value of this kind of general longtermist work when compared to work focusing on known risks. Based on my analysis to date, I believe some of the more specific policy-change ideas about preventing dangerous research or developing new technology to tackle pandemics (or AI regulation) to be a bit more tractable and a bit higher benefit-to-cost than this more general work to increase spending on risks.

This is really good to know as well.

Though I agree that refuges would not pass a CBA, I don't think they are an example of something that would impose extreme costs on those alive today; I suspect significant value could be obtained with $1 billion.

I think this is right. Our claim is that a strong longtermist policy as a whole would place extreme burdens on the present generation. We expect that a strong longtermist policy would call for particularly extensive refuges (and lots of them) as well as the other things that we mention in that paragraph.

We also focus on the risk of global catastrophes, which we define as events that kill at least 5 billion people.

This is higher than other thresholds for GCR I've seen - can you explain why?

We use that threshold because we think that focusing on catastrophes of that scale by itself makes the benefit-cost ratio come out greater than 1. I’m not so sure that’s the case for the more common thresholds, on which an event qualifies as a global catastrophe if it kills at least 1 billion people or at least 10% of the population.

I'm pretty sure this includes effects on future generations, which you appear to be against for GCR mitigation. 

We're not opposed to including effects on future generations in cost-benefit calculations. We do the calculation that excludes benefits to future generations to show that, even if one totally ignores benefits to future generations, our suite of interventions still looks like it's worth funding.

Interestingly, energy efficiency rules calculate the benefits of saved SCC, but they are forbidden to actually take this information into account in deciding what efficiency level to choose at this point.

Oh interesting! Thanks.

It's probably too late, but I would mention the Global Catastrophic Risk Management Act that recently became law in the US. This provides hope that the US will do more on GCR.

And thanks very much for this! I think we will still be able to mention this in the published version.
