Future Fund @ FTX Foundation
517 · Joined Sep 2019


Ah sweet, thank you! Didn't know this existed, glad to see it and just used it :)

Would it be possible for the EA forum to add footnote functionality? Thanks!

That's really cool to hear! Excited about your work!

This is a blog post, and we meant a month from the date the post was published. Sorry for the confusion!

Thanks for your comment Odin! At this point, we've finished considering regranting expressions of interest and have invited the regrantors for the initial test.

We’re planning to invite additional regrantors around the end of this month. We evaluate expressions of interest and referrals for regrantors on a rolling basis, so please send these in as soon as possible.

You are welcome to apply now!

Regrantors are able to make grants to people they know (in fact, having a diverse network is part of what makes for an effective regrantor); they just have to disclose if there's a conflict of interest, and we may reject a grant if we don't feel comfortable with it on those grounds. 

We don't currently have a network for regrantors that is open for external people to join.

Thanks! We are not planning to publish the list of regrantors for now.

Hi Ben, thanks for your kind words, and so sorry for the delayed response. Thanks for your questions!

  1. Yes, this could definitely be the case. As for which intervention would be most effective, I don’t know; I agree that more work on this would be beneficial. One important consideration is which intervention has the potential to raise the level of safety in the long run. Safety spending might only lead to a transitory increase in safety, or it could enable R&D that improves the level of safety in the long run. In the model, even slightly faster growth for a year means people are richer going forward forever, which in turn means people are willing to spend more on safety forever.

  2. At least for thinking about the impact of faster/slower growth, the eta > beta case seemed like the one to focus on, as you say (and this is what I do in the paper). When eta < beta, growth was unambiguously good; when eta >> beta, existential catastrophe was inevitable.

  3. In terms of expected number of lives, it seems like the worlds in which humanity survives for a very long time are dramatically more valuable than any world in which existential catastrophe is inevitable. Nevertheless, I want to think more about potential cases where existential catastrophe might be inevitable, but there could still be a decently long future ahead. In particular, if we think humanity’s “growth mode” might change at some stage in the future, the relevant consideration might be the probability of reaching that stage, which could change the conclusions.
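The asymptotic claim in item 2 can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper’s actual model: it assumes output grows at a constant rate, a fixed fraction of output goes to safety, and a per-period hazard of the form delta0 · C^eta / S^beta, with all parameter values chosen purely for illustration. With both consumption and safety spending growing at the same rate, the hazard scales like output^(eta − beta), so it decays when eta < beta (cumulative risk converges, survival probability stays positive) and compounds toward certain catastrophe when eta > beta.

```python
# Toy sketch (an assumption for illustration, NOT the paper's model):
# output Y_t grows at rate g, a fixed fraction s is spent on safety,
# and the per-period hazard is delta_t = delta0 * C_t**eta / S_t**beta.
import math

def survival_prob(eta, beta, g=0.02, s=0.1, delta0=1e-4, T=2000):
    """Probability of surviving T periods under the toy hazard model."""
    log_surv = 0.0
    for t in range(T):
        y = math.exp(g * t)      # output, normalized so Y_0 = 1
        c = (1 - s) * y          # consumption
        safety = s * y           # safety spending
        delta = min(delta0 * c**eta / safety**beta, 1.0)
        if delta >= 1.0:         # hazard hits certainty: doomed
            return 0.0
        log_surv += math.log1p(-delta)
    return math.exp(log_surv)

# eta < beta: hazard shrinks as the economy grows, survival stays positive
print(survival_prob(eta=1.0, beta=1.5))
# eta > beta: hazard compounds with growth, survival goes to zero
print(survival_prob(eta=1.5, beta=1.0))
```

The function names and parameters here (survival_prob, g, s, delta0) are made up for this sketch; the point is only the qualitative asymmetry between the eta < beta and eta > beta regimes that the comment describes.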
