
Pronounced: Basis Fund. Alt: Point-O-One-Percent Fund

Summary

The basic idea is for a new funding agency, or a subproject within existing funding agencies, to solicit proposals for research or startup ventures that, assuming everything goes well, can reduce existential risk by >0.01 percentage points (pp), at a price point of between $100M and $1B per 0.01% of absolute x-risk reduced (including both financial costs and reasonable models of human capital).

EDIT 2022/09/21: The $100M-$1B estimates are relatively off-the-cuff and not very robust; I think there are good arguments to go higher or lower. I think the numbers aren't crazy, partially because others have independently come to similar numbers (though some people I respect have different numbers). I don't think it's crazy to make decisions/defer roughly based on these numbers given limited time and attention. However, I'm worried about having too much secondary literature/too many large decisions based on my numbers, since that will likely result in information cascades. My current tentative guess as of 2022/09/21 is that there are more reasons to go higher (i.e., to think averting x-risk is more expensive) than lower. However, overspending on marginal interventions is more -EV than underspending, which pushes us to err on the side of conservatism.

Also, six months after publication and ~10 months after the idea's inception, I do not currently have any real plans to actually implement this new fund, though if someone reading this is excited about it and thinks they might have a relevant background, please feel free to ping me.

The two main advantages of having a specific fund or subfund that focuses on this are:

  1. Memetic call to increase ambition and quantification: Having an entire fund focused on projects that can reduce existential risk by >0.01% can help encourage people to actively look really hard for potentially amazing projects, rather than “settle” for mediocre or unscalable ones.
  2. Assessor/grantmaker expertise: Having a fund that focuses on specific quantitative models of this form helps to develop individual and institutional expertise in evaluating the quality of grants a) quantitatively and b) at a specific (high) tier of ambition.

More specifically, we give initial seed funding in the $30k-$10M range for projects that

  • have within-model validity,
  • where we don’t detect obvious flaws, and
  • seem healthily devoid of very large downside risks (in EV terms).

The idea is that initially successful projects will have an increasingly strong case for impact later, and then can move on to larger funders like Open Phil, Future Fund, SFF, Longview, etc.

We don’t care as much about naive counterfactual impact at the initial funding stage, as (for example) projects that can save other people’s time/$s for other important things are also valuable. We will preferentially fund projects that other people either a) are not doing or b) are not doing well.

The fund will primarily fund areas with more strategic clarity than AI safety or AI governance, as the “reduce x-risk by 0.01 percentage points” framing may be less valuable for those ventures. We may also fund projects dedicated to increasing strategic clarity, especially if we think the arguments for their research directions feel qualitatively and quantitatively compelling.

What does this funding source do that existing LT sources don’t?

  1. Point people heavily at a clear target.
    1. I feel like the existing ventures I see (including both public EAIF/LTFF grants and the small number of grants/research questions that come across my desk at RP to implicitly evaluate) rarely have clear stories, and never have numbers for “assuming this goes extremely well, we’d clearly reduce x-risk by X% through ABC”.
    2. Having a clear target to aim for is often helpful
  2. Force quantification
    1. Quantification is often good; see (ironically) the qualitative arguments here.
  3. Provide both a clear source of funding and specialization/diversification of grantmakers.
    1. Grantmakers here can focus on assessing quantitative claims at a certain bar of ambition for seed funding, while leaving qualitative claims, non-seed funding, and grant claims of much lower ambition to other grantmakers.

How will we determine if something actually reduces x-risk by 0.01pp if everything goes well?

We start with high-level estimates of x-risk from different sources, such as Michael Aird’s Database of existential risk estimates or Ord’s book, and have someone (maybe Linch? maybe a reader?) who’s good at evaluating quantitative models look at the within-model validity of each promising grant. A process such as the one outlined here could also be used when attempting quantification.
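To make the screening step concrete, here is a minimal BOTEC sketch of the kind of arithmetic involved; all of the numbers and variable names are hypothetical placeholders, not actual estimates or a committed methodology.

```python
# Minimal sketch of screening a proposal against the fund's bar (illustrative numbers only).
baseline_risk = 0.03        # assumed total x-risk from one source, e.g. 3%
fraction_averted = 0.005    # fraction of that risk the project averts if everything goes well
cost_usd = 25e6             # total cost, including a dollar value on human capital

reduction_pp = baseline_risk * fraction_averted * 100    # absolute reduction, in percentage points
cost_per_basis_point = cost_usd / (reduction_pp / 0.01)  # dollars per 0.01pp of x-risk reduced

meets_ambition_bar = reduction_pp >= 0.01                # >0.01pp if everything goes well
meets_price_bar = cost_per_basis_point <= 1e9            # within the ~$100M-$1B per 0.01pp range

print(reduction_pp, cost_per_basis_point, meets_ambition_bar and meets_price_bar)
```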

As the fund progresses and matures, we may have increasingly accurate and precise high-level quantitative estimates for levels of x-risk from each source (through a network of advisors, in-house research, etc.), as well as stronger and stronger know-how and institutional capacity to rapidly and accurately assess/evaluate theories of change. This may involve working with QURI or other quantification groups, hiring superforecaster teams, having in-house x-risk researchers, etc.

As the fund progresses and we build strong inside views of what’s needed to reduce x-risk, we may also generate increasingly many requests for proposals for specific projects.

Will this reduce people’s ambitions too much?

Having a >0.01pp x-risk reduction goal might be bad if people would otherwise work on projects with an uncertain chance of reducing x-risk by a lot. But I think there mostly isn’t enough x-risk in the world for this to be true, other than within AI.

But it might not be too hard to avoid poaching too much human capital from AI efforts, e.g. by making this less high-status than AI alignment, mandating that pay be lower than at top AI safety efforts, etc.

I do think there are biorisk-reduction projects whose cost-effectiveness is 1-2(?) orders of magnitude better than the 0.01pp bar, but not more than that. And I think aiming for >0.01pp does not preclude hitting 0.1pp. Note that 0.01pp is a lower bound.

Will this make people overly ambitious? 

E.g., maybe the target is too lofty, and important work that needs to be done but has no clear story for reducing x-risk by a basis point will be overlooked, or people will falter a lot in pursuit of lofty goals.

I think this is probably fine as long as we pay people well, provide social safety nets, etc. Right now EA’s problem is insufficient ambition, or at least insufficiently targeted ambition, rather than too much ambition.

In addition, we may in practice want to consider projects with existential risk reduction estimates in the microdoom (0.0001%) region, though we will of course very strongly prefer to fund projects with >0.01% existential risk reduction.

Won’t you run into issues with Goodharting/optimizer’s curse/bad modeling? 

In short, it naively seems like asking people to make a quantitative case for decreasing x-risk by a lot will result in fairly bad models. Like maybe we’d fund projects that collectively save like 10 Earths or something dumb like that. I agree with this concern but think it’s overstated because:

  1. the grantmaking agency will increasingly get good at judging models,
  2. it’s not actually that bad to overfund projects at the seed stage, since later grantmakers can then apply more judgment/discretion/skepticism at the point of scaling to tens or hundreds of millions of dollars, and
  3. my general intuition is that the optimizer’s curse is what you currently see when you ask people to quantify their intuitive models, and (especially in longtermism) we’d otherwise absolutely get the verbal equivalents of the optimizer’s curse all the time, just not formally quantified enough, so things are “not even wrong”

Next Steps

  1. People here evaluate this proposal and help decide whether it is on balance a good idea.
  2. I (Linch) consider whether trying out a minimally viable version of this fund is worth doing, in consultation with advisors, commentators here, and other members of the EA community.
  3. I recruit the part-time people needed for a simple, minimally viable version of this fund, e.g. a project manager, ops support, and a few technical advisors.
  4. If we do think it’s worth doing, I make some processes and an application form.
  5. I launch the fund officially on the EA Forum!

Acknowledgements and caveats

Thanks to Adam Gleave and the many commentators on my EA Forum question for discussions that led to this post. Thanks to Peter Wildeford, Michael Aird, Jonas Vollmer, Owen Cotton-Barratt, Nuño Sempere, and Ozzie Gooen for feedback on earlier drafts. Thanks also to Sydney von Arx, Vaidehi Agarwalla, Thomas Kwa, and others for verbal feedback.

This post was inspired by some of my work and thinking at Rethink Priorities, but this is not a Rethink Priorities project. All opinions are my own, and do not represent any of my employers.

Comments

What does this funding source do that existing LT sources don’t?

Natural followup: why a new fund rather than convincing an existing fund to use and emphasize the >0.01% x-risk reduction criterion?

I think there's a pretty smooth continuum between an entirely new fund and an RFP within an existing fund, particularly if you plan to borrow funders and operational support. 

I think I a) want the branding of an apparent "new fund" to help make more of a splash and motivate people to try really hard to come up with ambitious longtermist projects, and b) want to help skill up people within an org to do something pretty specific.

You also shave off downside risks a little if you aren't institutionally affiliated with existing orgs (but get advice in a way that decreases unilateralist-y bad stuff).

Speaking to the important points of the project's premise, as well as your judgement and experience, it seems good to list some existing longtermist projects and what you see as their pp reduction of x-risk.

 

(Subtext/Meta comment: This request is much more difficult than it seems, to the degree that I think it's unreasonable. The subtext/consequent value is that I think it's hard to quantify anything at 1pp, much less 0.01pp, and it would be good to understand anyone's thinking about this.)

Thanks, this is a good challenge! The short response is that I don't have an effectiveness model on hand for any existing projects. A slightly longer response is that most of the work so far has been "meta" (including growing the size and power of the EA movement, but also all sorts of prioritization and roadmap research), except maybe in AI, where we mostly just lack the strategic clarity to confidently say anything about whether we are increasing or decreasing x-risks. I think the effectiveness numbers for those things are harder to map out, compared to future engineering "megaprojects" where we are concretely targeting a particular existential risk channel and arguing that we can block a percentage of it if our projects scale well.

But I think the best way to answer the spirit of your question is to consider large-scale scientific and engineering projects of the future* and do rough BOTECs on how much existential risk they can avert. 

I think this might be a good/even necessary template before the .01% fund can become a reality. If such BOTECs are interesting to you and/or other future grantseekers, I'm happy to do so, or commission other people to do so.

*including both projects that already have a fair amount of active EA work on them (like vaccine platform development, certain forecasting projects, and metagenomic sequencing) and projects with very little current EA work on them (like civilizational refuges).

The subtext/consequent value is that I think it's hard to quantify anything at 1pp, much less 0.01pp

I think it's not crazy, because if you (e.g.) think of a specific x-risk as 3pp of worlds doomed, then a project that could shave off ~5% of that would be ~15 basis points, speaking somewhat loosely.
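For concreteness, the same arithmetic as a tiny sketch (illustrative numbers from the comment above):

```python
# 3pp of doom from one specific x-risk source = 300 basis points;
# a project that shaves off ~5% of that source, if it succeeds.
specific_risk_bp = 300
fraction_shaved = 0.05
print(specific_risk_bp * fraction_shaved)   # 15.0 -> ~15 basis points
```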

FYI just from reading the title, I initially thought the .01% in the name was referring to the richest .01% of the world, like maybe this was Founders Pledge but only for the ultra-rich.

This is a good point. I also don't like the current name; I prefer the name "Basis Fund", mainly because it seems suboptimal to have people calling it the "Point-O-One-Percent Fund" if they don't realise it's referring to basis points.

1. With regards to comparative advantages vs. current grantmakers, I've recently been thinking that it doesn't seem *that* hard to beat them on dedication per dollar spent. Sure, someone like Luke Muehlhauser probably has better judgment and deeper expertise than me, but I can research something specific for much longer.

2. This kind of proposal could also involve some degree of active grantmaking. I'm thinking mostly of soliciting proposals through prizes and then finding people for the most promising proposals, but there could be other approaches.

3. I've been thinking about how ALLFED relates to this. In some ways, it's similar to what one would be aiming for in terms of x-risk reduction per dollar. But they also did their own cost-effectiveness estimates in-house, which led to some funders not liking them as much, because their estimates were somewhat exaggerated/biased upwards, which is probably hard to avoid when one is very passionate about something.

4. I'd be particularly excited to see the part when one develops in-house x-risk reduction estimation capabilities. It seems like it could be useful for essentially any step in the funding pipeline, not just the beginning, though. A pretty natural thought is to increase the depth and expense of the quantification as the funding amounts go up, so it's not clear that one should frontload the quantification at the beginning.

5. I thought that the "pointing people at a pretty clear target" and the "you are still getting the optimizer's curse when evaluating stuff verbally, but you just don't notice as much" were strong points.

Interesting.

As I once mentioned here, computing basis points is much more compatible with a diverse range of views if we think about survival probability in relative terms (where a 100% increase means the probability of survival doubled) rather than in absolute terms (where a 100% increase means one universe saved).

To illustrate the issue with the latter, suppose a project can decrease some particular x-risk from 1% to 0%, and this x-risk is uncorrelated with others. If there are no other x-risks, this project brings total x-risk from 1% to 0%, so we gain 100 basis points. If other x-risks are 99% likely, this project brings total x-risk from 99.01% to 99%, so we gain 1 basis point. Thus whether a project passes the latter threshold depends on the likelihood of other x-risks. (But the project increases the former probability by 1% in both cases.)

Yeah, this is a really interesting challenge. I haven't formed an opinion about whether I prefer relative changes in survival probability vs. absolute basis points, and chose the latter partially for simplicity and partially for ease of comparability with places like Open Phil. I can easily imagine this being a fairly important high-level point, and your arguments do seem reasonable.

>If other x-risks are 99% likely, this project brings total x-risk from 99.01% to 99%

Shouldn't this be "from 100% to 99%"?

By "this x-risk is uncorrelated with others" I meant that the risks are independent and so "from 99.01% to 99%" is correct. Maybe that could be clearer; let me know if you have a suggestion to rephrase...

I'm confused as to why you use a change of 1pp (from 1% to 0%) in the no-other-x-risks case, but a change of 0.01pp (from 99.01% to 99%) in the other-x-risks case.

Suppose for illustration that there is a 1% chance of bio-x-risk in (the single year) 2030 and a 99% chance of AI-x-risk in 2040 (assuming that we survive past 2030). Then we survive both risks with probability (1-.01)*(1-.99) = .0099. Eliminating the bio-x-risk, we survive with probability 1-.99 = .01.

But if there is no AI risk, eliminating biorisk changes our survival probability from .99 to 1.

I see, thanks!

So in the two-risk case, P(die) = P(die from bio OR die from AI) = P(bio) + P(AI) - P(bio AND AI) = (using independence) 0.01 + 0.99 - 0.01*0.99 = 1 - 0.0099 = 0.9901. 
If P(die from bio)=0, then P(die) = P(die from AI) = 0.99.
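A short sketch of the arithmetic in this thread (independent risks, illustrative numbers only), showing how the absolute and relative framings come apart:

```python
# P(survive) = (1 - p_bio) * (1 - p_other), assuming the two risks are independent.
def gains_from_eliminating_bio(p_bio, p_other):
    baseline = (1 - p_bio) * (1 - p_other)
    improved = (1 - 0.0) * (1 - p_other)             # bio-x-risk fully eliminated
    absolute_gain_pp = (improved - baseline) * 100   # gain in percentage points of survival
    relative_gain = improved / baseline - 1          # proportional increase in survival probability
    return absolute_gain_pp, relative_gain

print(gains_from_eliminating_bio(0.01, 0.0))    # no other x-risk:  ~1.0pp,  ~1.01% relative
print(gains_from_eliminating_bio(0.01, 0.99))   # 99% other x-risk: ~0.01pp, ~1.01% relative
```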

I really like this and think it’s very promising!

0.01% risk at 1 billion would mean that $100 billion would reduce risk by 1%. That's probably more money than is available to all of EA at the moment. I guess that wouldn't seem like a good buy to me.

$100bn to reduce the risk by 100 basis points seems like a good deal to me, if you think you can model the risk like that. If I've understood that correctly, that would be the equivalent price of $10tn to avoid a certain extinction, which is less than 20% of global GDP. Bargain!
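For what it's worth, a quick sanity check of that extrapolation as a sketch; the GDP figure is a rough assumption, and as the counterpoints below note, the linear scaling is naive:

```python
# Naive linear extrapolation of the price point discussed above (illustrative only).
cost_per_basis_point = 1e9                 # $1B per 0.01pp, the top of the proposed range
cost_per_pp = cost_per_basis_point * 100   # $100B per percentage point of x-risk reduced
cost_for_100pp = cost_per_pp * 100         # ~$10T to go from certain doom to certain survival

global_gdp = 85e12                         # rough assumption: ~$85T of annual world GDP
print(cost_for_100pp / global_gdp)         # ~0.12, i.e. on the order of 10-20% of one year's GDP
```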

Some quick counterpoints:

  1. Average benefit =/= marginal benefit. So if the first $10B buys really good stuff, the marginal $1B might be much, much worse.
  2. We'll probably have more money in the future
  3. In worlds where we don't have access to more money, we can choose not to scale up projects
    1. I'd be especially excited if people submit projects to us of a fairly decent range of cost-effectiveness, so we can fund a bunch of things to start with, and have the option of funding everything later if we have the money for it.
  4. There's no fundamental law that says most x-risk is reducible; there might be a fair number of worlds where we're either doomed or saved by default.
  5. But I'm also interested in modeling out the x-risk reduction potential of various projects, and dynamically adjusting the bar.

The "By '0.01%', do you mean '0.01pp'" thing might also loom here. 0.01pp reduction is a much better buy than 0.01% reduction!

can reduce existential risk by > 0.01 percentage points (pp) at a price point of between 100M-1B/0.01% of absolute xrisk reduced

By "0.01%", do you mean "0.01pp"?
