# Tristan Cook

Research Analyst @ Center on Long-Term Risk
Working (0-5 years experience)
London, UK · Joined Jul 2020
tristancook.com

# Bio

I am a research analyst at the Center on Long-Term Risk.

I've worked on grabby aliens, the optimal spending schedule for AI risk funders, and evidential cooperation in large worlds.


# Posts (6)


# Sequences (1)

Optimal spending for AI risk

# Topic Contributions (3)

Strong agreement that a global moratorium would be great.

I'm unsure whether a global moratorium is the best thing to aim for, rather than a slowing of the race-like behaviour. Maybe a relevant similar case is whether to aim directly for the abolition of factory farms or just for incremental improvements in welfare standards.

This post from last year, "What an actually pessimistic containment strategy looks like", has some good discussion of slowing down AGI research.

Thanks for the transcript and for sharing this. The coverage seems pretty good, and the airplane crash analogy seems helpful for communication; I expect to use it in the future!

I agree. This lines up with models of optimal spending I worked on, which allowed for a post-fire-alarm "crunch time" in which one can spend a significant fraction of remaining capital.

I think "different timelines don't change the EV of different options very much" plus "personal fit considerations can change the EV of a PhD by a ton" does end up resulting in an argument for the PhD decision not depending much on timelines. I think that you're mostly disagreeing with the first claim, but I'm not entirely sure.

Yep, that's right: I'm disagreeing with the first claim. I think one could argue the main claim either by saying:

1. Regardless of your timelines, you (the person considering doing a PhD) shouldn't take them too much into consideration
2. I (advising you on how to think about whether to do a PhD) think timelines are such that you shouldn't take timelines too much into consideration

I think (1) is false, and think that (2) should be qualified by how one's advice would change depending on timelines. (You do briefly discuss (2), e.g. the SOTA comment).

To put my cards on the table, on the object level: I have relatively short timelines and think that fewer people should be doing PhDs on the margin. My highly speculative guess is that this post has the effect of marginally pushing more people towards doing PhDs (given the existing association of shorter timelines => shouldn't do a PhD).

I think you raise some good considerations but want to push back a little.

I agree with your arguments that
- we shouldn't use point estimates (of the median AGI date)
- we shouldn't fully defer to (say) Metaculus estimates
- personal fit is important

But I don't think you've argued that "Whether you should do a PhD doesn't depend much on timelines."

Ideally, as a community, we could estimate the optimal number of people who should do PhDs (factoring in their personal fit etc.) versus other paths.

I don't think this has been done, but since most estimates of AGI timelines have decreased in the past few years it seems very plausible to me that the optimal allocation now has fewer people doing PhDs. This could maybe be framed as raising the 'personal fit bar' to doing a PhD.

I think my worry boils down to thinking that "don't factor in timelines too much" could be overly general and not get us closer to the optimal allocation.

Thanks for the post!

> In this post, I'll argue that when counterfactual reasoning is applied the way Effective Altruist decisions and funding occurs in practice, there is a preventable anti-cooperative bias that is being created, and that this is making us as a movement less impactful than we could be.

One case I've previously thought about is that some naive forms of patient philanthropy could be like this: trying to take credit for spending on the "best" interventions.

I've polished an old draft and posted it as a short-form with some discussion of this (in the "When patient philanthropy is counterfactual" section).

# Some takes on patient philanthropy

Epistemic status: I’ve done work suggesting that AI risk funders be spending at a higher rate, and I'm confident in this result. The other takes are less informed!

I discuss

• Whether I think we should be spending less now
• Useful definitions of patient philanthropy
• Being specific about empirical beliefs that push for more patience
• When patient philanthropy is counterfactual
• Opportunities for donation trading between patient and non-patient donors

### Whether I think we should be spending less now

In principle I think the effective giving community could be in a situation where we should marginally be saving/investing more than we currently do (being ‘patient’).

However, I don’t think we’re in such a situation and in fact believe the opposite. My main crux is AI timelines; if I thought that AGI was less likely than not to arrive this century, then I would almost certainly believe that the community should marginally be spending less now.

### Useful definitions of patient philanthropy

I think patient philanthropy could be thought of as saying one of:

1. The community is spending at the optimal rate: let's create a place to save/invest, to ensure we don't (mistakenly) overspend and to keep our funds secure.
2. The community is spending above the optimal rate: let's push for more saving on the margin, and create a place to save/invest and give later.

I don't think we should call (1) patient philanthropy. Large funders (e.g. Open Philanthropy) already do some form of (1) by not spending all of their capital this year. Doing (1) is instrumentally useful for the community and is necessary whenever the community is not spending all of its capital this year.

I like (2) a lot more. This definition is relative to the community’s current spending rate and could be intuitively ‘impatient’. Throughout, I’ll use ‘patient’ to refer to (2): thinking the community’s current spending rate is too high (and so we do better by saving more now and spending later).

As an aside, thinking that the most 'influential' time is ahead is not equivalent to being patient: non-patient funders can also think this, but believe their last dollar of spending this year goes further than it would in any other year.

A potential third definition could be something like “patience is spending 0 to ~2% per year” but I don’t think it is useful to discuss.

### Differences in beliefs between funders driving patience

Of course, the large funders and the patient philanthropist may have different beliefs that lead them to disagree on the community's optimal spending rate. If I believed one of the following, I'd likely decrease my guess of the community's optimal spending rate (and become more patient):

• Thinking that there are no good opportunities to spend a lot on now (i.e. sharper diminishing returns to spending)
• Thinking that TAI / AGI is further away.
• Thinking that the rate of non-AI global catastrophic risk (e.g. nuclear war, biorisk) is lower
• Thinking that there’ll be great spending opportunities in the run-up to AGI
• Thinking that capital will be useful post-AGI
• Thinking that the existing large funders' capital is less secure, or that the large funders' future effective giving is less likely for other reasons

Since it seems likely that there are multiple points of disagreement leading to different spending rates, 'patient philanthropy' may be a useful term for the cluster of empirical beliefs that imply the community should be spending less. However, it seems better to be more specific about which particular beliefs are driving this the most.

For example “AI skeptical patient philanthropists” and “better-AI-opportunities-now patient philanthropists” may agree that the community’s current spending rate is too high, but disagree on the optimal (rate of) future spending.

Patient philanthropists can be considered funders with a very high 'bar': they will only spend on opportunities better than some bar b_p utils per dollar, and if none currently exist, they will wait. Non-patient philanthropists operate similarly but with a lower bar b_n. While the non-patient philanthropist has funds (and funds anything above b_n utils per dollar, including the opportunities that the patient philanthropist would otherwise fund), the patient philanthropist spends nothing. The patient philanthropist reasons that the counterfactual value of funding something the non-patient philanthropist would fund is zero, and so chooses to save. In this setup, the patient philanthropist is looking to fund and take credit for the 'best' opportunities and, while the large funder is around, is just funging with them. Once the large funder runs out of funds, the patient philanthropist's funding is counterfactual.[1]

If the large funder and patient philanthropist have differences in values or empirical beliefs, it is unsurprising that they have different guesses of the optimal spending rate and 'bar'. However, this should not happen with value- and belief-aligned funders and patient philanthropists: if the funder is acting 'rationally' and spending at the optimal rate, then (by definition) there are no type-(2) patient philanthropists who have the same beliefs.

### Opportunities for donation trading between patient and non-patient small donors

There are some opportunities for trade between patient philanthropists and non-patient philanthropists, similar to how people can bet on AI timelines. Let's say Alice pledges to give $X per year from her income and thinks that the community should be spending more now, and that Bob thinks the community should be spending less and saves $X per year from his income in order to give it away later. There's likely an agreement possible (dependent on many factors) where they both benefit.

A simple setup could involve:

• Bob, for the next N years, giving away his $X per year to Alice's choice of giving opportunity
• Alice, after N years, giving $X per year to Bob's preferred method of investing/saving or giving

This example closely follows similar setups suggested for betting on AI timelines.

1. ^ Unless the amazing opportunity of more than b_p utils per dollar appears just after the large funder runs out of funds, where 'just after' is the time that the large funder would have kept going with their existing spending strategy (funding everything above b_n utils per dollar) by using the patient philanthropist's funds.
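The funging dynamic described above can be sketched in a few lines. This is a toy model: the `simulate` function, the bars, and the opportunity stream are all my own illustrative assumptions, not anything from the original model.

```python
# Toy model of funging between a large (non-patient) funder with a low
# bar and a patient funder with a high bar. All numbers are illustrative.

def simulate(opportunities, large_capital, patient_capital,
             bar_large, bar_patient):
    """Each opportunity is (cost, utils_per_dollar). The large funder
    funds anything above its bar while it has capital; the patient
    funder only steps in for opportunities above its higher bar that
    the large funder can no longer cover. Returns the patient funder's
    (counterfactual) spending."""
    patient_spent = 0.0
    for cost, utils_per_dollar in opportunities:
        if utils_per_dollar >= bar_large and large_capital >= cost:
            large_capital -= cost      # funged: patient funder saves instead
        elif utils_per_dollar >= bar_patient and patient_capital >= cost:
            patient_capital -= cost    # counterfactual patient spending
            patient_spent += cost
    return patient_spent

opps = [(10, 5.0), (10, 2.0), (10, 6.0), (10, 4.0)]

# While the large funder has capital, the patient funder spends nothing,
# even on the 'best' (highest utils-per-dollar) opportunities:
print(simulate(opps, large_capital=40, patient_capital=40,
               bar_large=1.0, bar_patient=3.0))   # 0.0

# Once the large funder is depleted, patient funding becomes counterfactual:
print(simulate(opps, large_capital=20, patient_capital=40,
               bar_large=1.0, bar_patient=3.0))   # 20.0
```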

DM = digital mind

Archived version of the post (with no comments at the time of the archive). The post is also available on the Sentience Institute blog

I think you are mistaken about how Gift Aid / payroll giving works in the UK (your footnote 4): it only has an effect once you are a higher-rate or additional-rate taxpayer. I wrote some examples up here. As a basic-rate taxpayer you don't get any benefit; only the charity does.

Thanks for the link to your post! I'm a bit confused about where I'm mistaken. I wanted to claim that:

(ignoring payroll giving or claiming money back from HMRC, as you discuss in your post) taking a salary cut (while at the 40% marginal tax rate) is more efficient (at getting money to your employer) than receiving taxed income and then donating it (with Gift Aid) to your employer.

Is this right?
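For what it's worth, here is a rough arithmetic sketch of that comparison. The rates are my own illustrative assumptions (40% higher-rate income tax, 2% employee National Insurance, the standard 25% Gift Aid gross-up, and higher-rate relief of 20% of the gross donation), and this is not tax advice.

```python
# Rough comparison of two ways for a UK higher-rate taxpayer to get
# money to a charitable employer. Rates below are assumptions.

INCOME_TAX = 0.40   # assumed higher-rate income tax
EMPLOYEE_NI = 0.02  # assumed employee National Insurance at this band

def salary_cut(gross_cut):
    """Forgo `gross_cut` of gross salary: the employer keeps the full
    amount, and the employee only loses the post-tax take-home pay."""
    net_cost = gross_cut * (1 - INCOME_TAX - EMPLOYEE_NI)
    return net_cost, gross_cut

def gift_aid_donation(net_donation):
    """Donate taxed income with Gift Aid: the charity reclaims basic-rate
    tax (a 25% gross-up), and the donor reclaims higher-rate relief of
    20% of the grossed-up donation."""
    gross = net_donation * 1.25
    relief = 0.20 * gross
    return net_donation - relief, gross

cost_a, to_employer = salary_cut(100)
cost_b, to_charity = gift_aid_donation(80)  # also grosses up to 100
print(round(cost_a, 2), to_employer)            # 58.0 100
print(round(cost_b, 2), round(to_charity, 2))   # 60.0 100.0
```

Under these assumed rates, delivering £100 costs the donor roughly the same either way, with the salary cut coming out slightly ahead because it also avoids employee NI.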

My impression is that people within EA already defer too much in their donation choices, and so should be spending more time thinking about how and where to give and what is being missed by GiveWell/OP etc. Or they could defer some (large) proportion of their giving to EA causes but still keep a small amount for personal choices.

Fair point. I say that because I'm somewhat more excited about one person doing a 100-hour investigation than 10 people doing 10-hour investigations, and I would still push for people to enter small-to-medium-sized donor lotteries (which is arguably a form of deferral).

# How I think we should do anthropics

I think we should reason in terms of decisions and not use anthropic updates or probabilities at all. This is what is argued for in Armstrong's Anthropic Decision Theory, which is itself a form of updateless decision theory.

In my mind, this resolves a lot of confusion around anthropic problems when they're reframed as decision problems.

I'd pick, in this order,

1. Minimal reference class SSA
2. SIA
3. Non-minimal reference class SSA

I choose this ordering because both minimal reference class SSA and SIA can give the 'best' decisions (ex-ante optimal ones) in anthropic problems,[1] when paired with the right decision theory.

Minimal reference class SSA needs pairing with an evidential-like decision theory, or one that supposes you are making choices for all your copies. SIA needs pairing with a causal-like decision theory (or one that does not suppose your actions give evidence for, or directly control, the actions of your copies). Since I prefer the former set of decision theories, I prefer minimal reference class SSA to SIA.

Non-minimal reference class SSA, meanwhile, cannot be paired with any (standard) decision theory to get ex-ante optimal decisions in anthropic problems.
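As an illustrative check (the bet and the numbers here are my own, not from the comment): in Sleeping Beauty, suppose each awakening you are offered a bet that costs $1 and pays $x if the coin landed tails (heads gives one awakening, tails gives two). Ex ante, always accepting is worth 0.5·(−1) + 0.5·2·(x−1) = x − 1.5, so the optimal policy is to accept iff x ≥ 1.5, and both pairings recover it:

```python
# Sleeping Beauty bet: pay $1 per awakening, receive $x per awakening if
# tails. Heads (prob 1/2) -> 1 awakening; tails -> 2 awakenings.

def ev_ssa_edt(x):
    # Minimal-reference-class SSA gives P(tails) = 1/2; an evidential-style
    # decision theory treats my acceptance as controlling both tails copies.
    return 0.5 * (-1) + 0.5 * 2 * (x - 1)

def ev_sia_cdt(x):
    # SIA gives P(tails | awake) = 2/3; a causal-style decision theory
    # counts only this single awakening's payoff.
    return (1/3) * (-1) + (2/3) * (x - 1)

for x in (1.4, 1.5, 1.6):
    # Both pairings recommend accepting exactly when x >= 1.5,
    # matching the ex-ante optimal policy.
    assert (ev_ssa_edt(x) >= 0) == (ev_sia_cdt(x) >= 0) == (x >= 1.5)
```

The two expected values differ numerically but cross zero at the same threshold, which is the sense in which both pairings are ex-ante optimal.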

For more on this, I highly recommend Oesterheld & Conitzer's Can de se choice be ex ante reasonable in games of imperfect recall?

1. ^

For example, the Sleeping Beauty problem or the absent-minded driver problem.