Tristan Cook

Research Analyst @ Center on Long-Term Risk
Working (0-5 years experience)
476 · London, UK · Joined Jul 2020
tristancook.com

Bio

I am a research analyst at the Center on Long-Term Risk.

I previously studied maths at the University of Cambridge and University of Warwick.

Send me anonymous feedback or messages here https://www.admonymous.co/tristancook

Comments (51)

Thanks!

And thanks for the suggestion, I've created a version of the model using a Monte Carlo simulation here :-)

This is a short follow-up to my post on the optimal timing of spending on AGI safety work which, given exact values for the future real interest rate, diminishing returns and other factors, calculated the optimal spending schedule for AI risk interventions.

This has also been added to the post’s appendix and assumes some familiarity with the post.

Here I consider the most robust spending policies, supposing uncertainty over nearly all parameters in the model[1] rather than finding the optimal solutions based on point estimates, and again find that the community's current spending rate on AI risk interventions is too low.

My distributions over the model parameters imply that:

  • Of all fixed spending schedules (i.e. spending X% of your capital per year[2]), the best strategy is to spend 4-6% per year.
  • Of all simple spending schedules with two regimes (now until 2030, and 2030 onwards), the best strategy is to spend ~8% per year until 2030 and ~6% per year afterwards (see the sketch after this list).
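As a rough illustration of how such schedules can be compared under parameter uncertainty, here is a minimal Monte Carlo sketch. It is not the notebook's actual model: the parameter distributions, utility function and capital dynamics below are placeholder assumptions, and the real model also includes influence spending, competition and post-fire-alarm spending.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_params():
    """Placeholder parameter distributions (illustrative, not the notebook's)."""
    return {
        "real_interest": rng.normal(0.03, 0.02),            # mean annual real return
        "agi_year": 2022 + rng.lognormal(np.log(20), 0.5),   # AGI arrival year
        "returns_exponent": rng.uniform(0.2, 0.6),           # diminishing returns on spending
    }

def utility(schedule, params, start_year=2022, horizon=80):
    """Toy utility: diminishing returns to spending made before AGI arrives."""
    capital, total = 1.0, 0.0
    for t in range(horizon):
        year = start_year + t
        if year >= params["agi_year"]:
            break
        spend = schedule(year) * capital
        capital = (capital - spend) * (1 + params["real_interest"])
        total += spend ** params["returns_exponent"]
    return total

fixed_5pct = lambda year: 0.05
two_regime = lambda year: 0.08 if year < 2030 else 0.06

samples = [sample_params() for _ in range(10_000)]
for name, schedule in [("fixed 5%", fixed_5pct), ("8% then 6%", two_regime)]:
    mean_u = np.mean([utility(schedule, p) for p in samples])
    print(f"{name}: mean utility = {mean_u:.3f}")
```

The point of the sketch is the structure (sample parameters, evaluate each candidate schedule on every sample, compare expected utility), not the particular numbers it prints.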

I recommend entering your own distributions for the parameters in the Python notebook here[3]. Further, these preliminary results use few samples: more reliable results would be obtained with more samples (and more computing time).

I allow for post-fire-alarm spending (i.e., we are certain AGI is soon and so can spend some fraction of our capital). Without this feature, the optimal schedules would likely recommend a greater spending rate.


Caption: Fixed spending rate. See here for the distributions of utility for each spending rate.

Caption: Simple (two-regime) spending rate


Caption: The results from a simple optimiser[4], when allowing for four spending regimes: 2022-2027, 2027-2032, 2032-2037 and 2037 onwards. This result should not be taken too seriously: more samples should be used, the optimiser should run for a greater number of steps, and more intervals should be used. As with other results, this is contingent on the distributions of parameters.

 

Some notes

  • The system of equations - describing how a funder's spending on AI risk interventions changes the probability of AGI going well - is unchanged from the main model in the post.
  • This version of the model randomly generates the real interest rate, based on user inputs. So, for example, one's capital can go down (see the sketch below).
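For intuition on that last point, here is a minimal sketch of how a randomly generated real interest rate can push capital down even under a modest spending rate. The mean-reverting process and its parameters are placeholder assumptions, not the notebook's actual generator.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_capital(spend_rate=0.05, years=50, mean_rate=0.03,
                     reversion=0.3, volatility=0.04):
    """Capital path under a mean-reverting (AR(1)) real interest rate.

    All parameter values here are illustrative placeholders.
    """
    capital, rate = 1.0, mean_rate
    path = [capital]
    for _ in range(years):
        # mean-reverting step with Gaussian noise
        rate += reversion * (mean_rate - rate) + rng.normal(0, volatility)
        capital = capital * (1 - spend_rate) * (1 + rate)
        path.append(capital)
    return np.array(path)

# A few sample paths: some end well below the starting capital
for i in range(3):
    path = simulate_capital()
    print(f"path {i}: final capital = {path[-1]:.2f}")
```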

Caption: An example real interest rate function, cherry-picked to show how our capital can go down significantly. See here for 100 unbiased samples.

Caption: Example probability-of-success functions. The filled circle indicates the current preparedness and probability of success.

 

Caption: Example competition functions. They all pass through (2022, 1) since the competition function is the relative cost of one unit of influence compared to the current cost. 
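To illustrate the normalisation in the caption above: a competition function gives the cost of one unit of influence relative to its 2022 cost, so by construction it equals 1 in 2022. The exponential form below is purely illustrative, not the parameterisation used in the model.

```python
def competition(year, growth_rate=0.05, base_year=2022):
    """Relative cost of one unit of influence vs. the base year.

    Illustrative exponential form only; competition(base_year) == 1 by construction.
    """
    return (1 + growth_rate) ** (year - base_year)

print(competition(2022))  # 1.0
print(competition(2032))  # ~1.63x the 2022 cost under an assumed 5% annual growth
```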

 

This short extension started due to a conversation with David Field and a comment from Vasco Grilo; I'm grateful to both for their suggestions.
 

 

  1. ^

     Inputs over which uncertainty is not considered include: historic spending on research and influence, and the rate at which the real interest rate changes. The post-fire-alarm returns are assumed to be the same as the pre-fire-alarm returns.

  2. ^

    And supposing a 50:50 split between spending on research and influence.

  3. ^

     This notebook is less user-friendly than the notebook used in the main optimal spending result (though not unusably so) - let me know if improvements to the notebook would be useful for you.

  4. ^

    The intermediate steps of the optimiser are here.

Previously the benefactor has been Carl Shulman (and I'd guess he is again, but this is pure speculation). From the 2019-2020 donor lottery page:

Carl Shulman will provide backstop funding for the lotteries from his discretionary funds held at the Centre for Effective Altruism.

The funds mentioned are likely these $5m from March 2018:

The Open Philanthropy Project awarded a grant of $5 million to the Centre for Effective Altruism USA (CEA) to create and seed a new discretionary fund that will be administered by Carl Shulman

Answer by Tristan Cook · Nov 24, 2022

I'll be giving to the EA Funds donor lottery (hoping it's announced soon :-D )

This is great to hear! I'm personally more excited by quality-of-life improvement interventions than by saving lives, so I'm really grateful for this work.

Echoing kokotajlod's question for GiveWell's recommendations, do you have a sense of whether your recommendations change with a very high discount rate (e.g. 10%)? Looking at the graph of GiveDirectly vs StrongMinds, it looks like the vast majority of benefits are in the first ~4 years.
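As a rough illustration of why this matters, the sketch below compares the discounted value of a hypothetical front-loaded benefit stream (most benefits in the first ~4 years) with an equally sized spread-out one, at two discount rates. The numbers are made up for illustration, not taken from the HLI or GiveWell analyses.

```python
def present_value(benefits, discount_rate):
    """Discounted sum of a stream of annual benefits (year 0 undiscounted)."""
    return sum(b / (1 + discount_rate) ** t for t, b in enumerate(benefits))

# Hypothetical benefit streams, both totalling 10 units undiscounted
front_loaded = [4, 3, 2, 1] + [0] * 16   # most benefit in the first 4 years
spread_out   = [0.5] * 20                # same total over 20 years

for rate in (0.04, 0.10):
    print(f"discount rate {rate:.0%}: "
          f"front-loaded PV = {present_value(front_loaded, rate):.2f}, "
          f"spread-out PV = {present_value(spread_out, rate):.2f}")
```

A high discount rate barely dents the front-loaded stream but substantially shrinks the spread-out one, which is why front-loaded interventions may hold up better at a 10% rate.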

Minor note: the link at the top of the page is broken (I think the 11/23 in the URL needs to be changed to 11/24)

When LessWrong posts are crossposted to the EA Forum, there is a link in EA Forum comments section:

This link just goes to the top of the LessWrong version of the post and not to the comments. I think either the text should be changed or the link go to the comments section.

In this recent post from Oscar Delaney they got the following result (sadly it doesn't go up to 10%, and in the linked spreadsheet the numbers seem hardcoded):

Caption: Cost-effectiveness vs discount rate

Top three are Helen Keller International (0.122), Sightsavers (0.095), and AMF (0.062).

(minor point that might help other confused people)

I had to google CMO (which I found to mean Chief Marketing Officer) and also thought that BOAS might be an acronym - but found on your website:

BOAS means good in Portuguese, clearly explaining what we do in only four letters! 

Increasing/decreasing one's AGI timelines decreases/increases the importance[1] of non-AGI existential risks because there is more/less time for them to occur[2].

Further, as time passes and we get closer to AGI, the importance of non-AI x-risk decreases relative to AI x-risk. This is a particular case of the above claim.
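As a rough worked example of the first claim (with made-up numbers): if non-AI existential risk were a constant 0.2% per year and independent across years, the chance it materialises before AGI is about 1 - (1 - 0.002)^T, roughly 4% on a 20-year timeline but roughly 14% on a 75-year timeline.

```python
# Illustrative only: assumed constant 0.2% annual non-AI x-risk, independent across years
annual_risk = 0.002
for timeline_years in (20, 75):
    p_before_agi = 1 - (1 - annual_risk) ** timeline_years
    print(f"{timeline_years}-year AGI timeline: "
          f"P(non-AI x-risk occurs first) = {p_before_agi:.1%}")
```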

  1. ^

    but not necessarily tractability & neglectedness

  2. ^

    If we think that nuclear/bio/climate/other work becomes irrelevant post-AGI, which seems very plausible to me

These seem neat! I'd recommend posting them to the EA Forum - maybe just as a shortform - as well as on your website so people can discuss the thoughts you've added (or maybe even posting the thoughts on your shortform with a link to your summary).

For a while I ran a podcast discussion meeting at my local group and I think summaries like this would have been super useful to send to people who didn't want to / have time to listen. A bonus - though maybe too much effort - would be generating discussion prompts based on the episode.
