Researcher @ Future of Humanity Institute / Longview Philanthropy
Working (0-5 years experience)


Head of Research @ Longview Media

Also Research scholar @ FHI and assistant to Toby Ord. Philosophy student before that.

I do a podcast about EA called Hear This Idea.


Answer by finm · Feb 25, 2023

I think this is a good and important question. I also agree that humanity's predicament in 500 years is wildly unpredictable.

But there are some considerations that can guide our guess:

  • Almost everyone wants to improve their own lives; few people want to make their own lives worse for the sake of it
  • Some people want to improve the lives of others for the sake of it; few people want to harm others for the sake of it
  • Technological progress tends to enable people to get more of what they want; in this case, the things that improve their lives
  • If humans are still around in 500 years, we should expect them to be more technologically advanced — since it seems easier to learn new capabilities than to entirely forget old ones

If you begin totally unsure whether the future is good or bad in expectation, then considerations like these might break the symmetry (while remaining entirely open to the possibility that the future is bad).

This post might also be useful; it recomplicates things by giving some considerations on the other side.

Looking forward to reading this. In the meantime, I notice that this post hasn't been linked and seems likely to be relevant:

Coherence arguments do not entail goal-directed behavior by Rohin Shah

Answer by finm · Dec 19, 2022

I'd be pretty interested in an EA instance. If it were to happen then I guess it should happen soon, since it looks like a significant fraction of new accounts will be created in the next few weeks. Does anyone have expertise with this? I'd probably be able to provide some support in setting it up, but don't currently have the time to lead on doing this.

Came here to share this also! What a great story.

Answer by finm · Nov 29, 2022

A list I'm considering for end-of-year donations, in no special order:

I'm also very interested in the best ways to help people affected by recent events, especially ways which are more scalable / accessible than supporting personal connections.

Sorry if I missed this in other comments, but one question I have is whether there are ways for small donors to support, in the short term, projects or individuals who have been thrown into uncertainty by the FTX collapse (such as people who were planning on the assumption that they would be receiving a regrant). I suppose it would be possible to donate to Nonlinear's emergency funding pot, or just to something like the EAIF / LTFF / SFF.

But I'm imagining that a major bottleneck on supporting these affected projects is just having capacity to evaluate them all. So I wonder about some kind of initiative where affected projects can choose to put some details on a public register/spreadsheet (e.g. a description of the project, how they've been affected, what amount of funding they're looking for, contact details). Then small donors can look through the register, evaluate projects which fit their areas of interest / experience, and reach out to them individually. It could be a living spreadsheet where entries are updated if their plans change or they receive funding. And maybe there could be some way for donors to coordinate around funding particular projects that no single donor could afford to fund alone, and which wouldn't run without some threshold amount. E.g. donors themselves could flag that they'd consider pitching in on some project if others were also interested.

A more sophisticated version of this could involve small donors putting donations into some kind of escrow managed by a trusted party that donates on people's behalf, and that trusted party shares information about projects affected by FTX with donors. That would help maintain some privacy / anonymity if some projects would prefer that, but at administrative cost. I'd guess this idea is too much work given the time-sensitivity of everything.

An 80-20 version is just to set up a form similar to Nonlinear's, but which feeds into a database which everyone can see, for projects happy to publicly share that they are seeking shortish-term funding to stay afloat / make good on their plans. Then small donors can reach out at their discretion. If this worked, then it might be a way to help 'funge' not just the money but also the time of grant evaluators at grantmaking orgs (and similar) which is spent evaluating small projects. It could also be a chance to support projects that you feel especially strongly about (and suspect that major grant evaluators won't share your level of interest).
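To make the register idea above concrete, here is a minimal sketch of what one entry might look like as a data structure. The field names and statuses are illustrative assumptions, not a real schema from any existing form or database:

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    """One row of the hypothetical public funding register.

    Fields mirror the details suggested above: a description of the
    project, how it was affected, the amount sought, and contact details.
    """
    project: str
    description: str
    ftx_impact: str            # how the project was affected
    funding_sought_usd: int
    contact: str
    status: str = "seeking"    # updated if plans change or funding arrives
    # Donors who flagged interest, to help coordinate around larger asks
    interested_donors: list = field(default_factory=list)

# A made-up example entry
entry = RegisterEntry(
    project="Example project",
    description="Short description of the work",
    ftx_impact="Planned around an expected regrant",
    funding_sought_usd=25_000,
    contact="contact@example.org",
)
print(entry.status)  # "seeking"
```

In practice a shared spreadsheet would do the same job; the point is just that each row carries enough information for a small donor to self-serve an initial evaluation.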

I'm not sure how to feel about this idea overall. In particular, I feel misgivings about the public and uncoordinated nature of the whole thing, and also about the fact that typically it's a better division of labour for small donors to follow the recommendations of experienced grant investigators/evaluators. Decisions about who to fund, especially in times like these, are often very difficult and sensitive, and I worry about weird dynamics if they're made public.

Curious about people's thoughts, and I'd be happy to make this a shortform or post in the effective giving sub-forum if that seems useful.

Thanks for writing this! I'm inclined to agree with a lot of it.

I am cautious about over-updating on the importance of earning to give. Naively speaking, (longtermist) EA's NPV has crashed by ~50% (maybe more since Open Phil's investments went down), so (very crudely, assuming log returns to the overall portfolio) earning to give is looking roughly twice as valuable in money terms, maybe more. How many people are at the threshold where this flips the decision on whether ETG is the right move for them? My guess is actually not a ton, especially since I think the income where ETG makes sense is still pretty high (maybe more like $500k than $100k — though that's a super rough guess).
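The "roughly twice as valuable" step follows from the log-returns assumption: the marginal value of a dollar under log utility is 1/W, so halving the portfolio roughly doubles the value of a marginal donation. A toy check of that arithmetic (the portfolio sizes are arbitrary units, not real figures):

```python
import math

def marginal_value(portfolio, epsilon=1.0):
    """Approximate marginal value of one extra dollar under log returns:
    d/dW log(W) = 1/W, approximated here by a finite difference."""
    return (math.log(portfolio + epsilon) - math.log(portfolio)) / epsilon

before = marginal_value(100.0)  # hypothetical pre-crash portfolio
after = marginal_value(50.0)    # after a ~50% crash

print(after / before)  # roughly 2: a marginal dollar is ~twice as valuable
```

This is only the crude first-order story; it ignores everything about where the money actually goes.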

That said, there may be other reasons EA has been underrating (and continues to underrate) ETG, like the benefits of having a diversity of donors. Especially when supporting more public-facing or policy-oriented projects, this really does just seem like a big deal. A rough way of modeling this is that the legitimacy / diversity of a source of funding can act like a multiplier on the amount of money, where funding pooled from many small donors often does best. The Longtermism Fund is a cool example of this imo.
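The multiplier framing above can be sketched as a toy model. The functional form here (a multiplier growing logarithmically in the number of independent donors) is purely an illustrative assumption of mine, not anything claimed in the comment:

```python
import math

def effective_funding(amount, n_donors):
    """Toy model: a grant's effective value is its dollar amount times a
    legitimacy multiplier that grows, with diminishing returns, in the
    number of independent donors behind it. The 0.1 coefficient and the
    log form are illustrative assumptions only."""
    multiplier = 1 + 0.1 * math.log(n_donors)
    return amount * multiplier

# $100k from one large donor vs. the same amount pooled from 1,000 small donors
single = effective_funding(100_000, 1)
pooled = effective_funding(100_000, 1_000)
print(single, pooled)  # the pooled funding gets a higher effective value
```

Under any model of this shape, pooled funding from many donors dominates a single-source grant of the same size, which is the intuition behind vehicles like the Longtermism Fund.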

Another thing that has changed since the days when ETG was a much more widely applicable recommendation is that fundraising might be more feasible, because there are more impressive people / projects / track records to point to. So the potential audience of HNWIs interested in effective giving has plausibly grown quite a bit.

I only skimmed this really quickly, so sorry if these points are redundant:

  •  Matheny (2006) is relevant here. He finds ~ $2.50 per expected life-year saved, which is obviously much more optimistic than your estimate (iirc because he's less conservative about accounting for human extinction).
  • In case it's relevant/useful for readers, here's a more qualitative post about risks from asteroids.
  • In general I'm pretty wary about direct comparisons to GiveWell, because very often these favourable comparisons compare apples to oranges, even in subtle ways. In particular, it might be worth looking at how GiveWell discounts along different lines (especially time), and seeing what happens if you use the same assumptions.

Anyway, thanks for writing this! It is surprising that the stereotypical "popularly salient catastrophic risk that in fact seems thousands of times less significant than other catastrophic risks" still looks worth trying to mitigate.

Thank you Noah, I added a part of this to the main post.

In no particular order. I'll add to this if I think of extra books.

Precursors to thinking about existential risks and GCRs

Nuclear risk


AI safety


Space / big-picture thinking


  1. ^

    This covers both nuclear security and biosecurity topics.

  2. ^

    Especially Chapter 8: 'The Environment: Where Does Prudence Lie?', which contains some remarkable precursors to metaphors and arguments in The Precipice etc.
