
Tl;dr: As part of a Rethink Priorities work trial, I attempted to assess the most promising approach to conducting cause prioritization within anti-aging.

This article was originally written as a work trial for Rethink Priorities. It was originally completed in 4.5 hours, and due to time constraints, I was unable to check the writing or a lot of my claims carefully. I thought it might be helpful for junior EAs to see what an unpolished work trial looks like, so I decided to publish mostly as-is. 

The question was roughly “discuss how you would go about prioritizing prioritization research within a topic of your choice.”

After receiving some light external feedback, I did end up fixing a few typos and places with obvious miscommunications. I decided in the end not to fix things based on comments/critiques from reviewers that I would classify as identifying “conceptual errors.” Instead, I sometimes chose to highlight them by adding new paragraphs in [square brackets].

Hopefully my mistakes can be useful teaching moments for others! In the spirit of Cunningham’s Law, I’m vaguely optimistic that somebody reading this will think “wow this guy’s such a moron” and proceed to write a much better version as a result. 

You can see the original, entirely unedited, version here. 

Prompt Excerpt

EAs conducting cause prioritization research might focus on fundamental research (e.g., crucial considerations work) or intervention research (e.g., quantifying and comparing the cost-effectiveness of different interventions). With reference to one particular cause area of your choice, and given where EA is at now and given current research allocations and spending with respect to that cause area, to which of these two broad buckets do you think the next $1M of EA research money in that chosen cause area should be allocated on the margin if we could only allocate to one? (The idea is to analyze how organizations and researchers should spend additional time and money between these buckets, rather than what the overall current distribution of resources is.)

Note that this piece is meant to be evaluative, not argumentative or persuasive, and you should try to give honest and fully considered views (though don’t balance just for the sake of balance).

Epistemic status (as of 2020/08/26): Moderately certain. I place ~70% confidence that I would still stand by these claims after ~20 more hours of additional thought, and ~60% confidence that the core claims here will stand the test of time (i.e., will still seem reasonable in 5 years).

Executive Summary 

Anti-aging research, if successful, is likely to be a big deal for a wide range of cause areas EAs are interested in.

How should EAs prioritize how to think about anti-aging? I briefly consider 5 classes of ways to do cause prioritization, and tentatively arrive at “Fundamental research on how to think about the effects of anti-aging” as the primary way to spend the next marginal $1 million of research effort.


There has been recent interest in EA circles about the importance of anti-aging as a potential new cause area, particularly in its connection to long-termism. In this article, I will quickly discuss the high-level considerations for cause prioritization work on anti-aging.

In the article, I will be assuming an explicitly cosmopolitan, longtermist, and aggregative consequentialist framework (I am open to other value systems and may cover important considerations for other value systems when they come up, but I will not be hunting for those distinctions automatically). 

Why does this matter?

If anti-aging research is successful, it will plausibly have many large direct and indirect effects. Direct effects include substantially less human suffering from age-related diseases, less overall death from longer life expectancies (though it’s philosophically debatable how important this is), and presumably lower monetary spending on treating age-related diseases, causing net savings to society.

Indirect effects include, but are not limited to, the following:

Probable Benefits:

  • Those in charge will care more about a longer future (since they will live in it), with benefits akin to age-weighted voting.
  • The people in charge will likely be smarter. If older leaders are a given, we’d like them to not have cognitive decline.
  • A broadly smarter, wiser population to make good choices at important cruxes in the future (including but not limited to times of existential risks).
  • If some values are weakly “correct” in the sense that they are more likely to be arrived at through reflection, we may expect longevity (both health-span and lifespan enhancing effects) to cause more overall reflection and thus better values.

Probable Costs:

  • Potential for dictatorships to last longer (though note that the total added length may be fairly small)
  • Lower willingness to explore when making explore/exploit tradeoffs - greater probability of institutional lock-in on bad/mediocre futures.
  • Inequality either between age groups, or greater lock-in/persistence of existing inequalities


Effects of unclear sign:

  • Likely faster overall intellectual progress since people can build on their own past progress/learnings and continue to be productive
  • Potential slowing of certain specific types of scientific progress (“Science advances one funeral at a time”)
  • Slower path to genetically engineered humans becoming a large percentage of our population
  • Large shifts in economic priorities
  • Potential for the slowing of moral progress/value drift (it’s unclear which philosophical considerations dominate for “good” moral progress vs “bad” value drift, see here for some discussion).
    • For example, speciesism might be “locked in”
    • On the other hand, slower societal moral change means our spiritual descendants are less likely to have alien morality without at least thinking about it a bunch first.
    • Gwern on Narrowing Circle

At the high level, however, even if the particular stories above are not compelling, I would argue that large increases in healthspan or lifespan are very likely (>90%) to have big effects on society and institutions. So our prior should very much be that “anti-aging is a big deal,” even if we’re not sure how.

Ways to divide up anti-aging cause prioritization

I currently think there are roughly 5 different ways to think about cause prioritization on anti-aging:

  1. Fundamental research on what to think about with regard to anti-aging
  2. Fundamental research on known questions in anti-aging
  3. Intervention research on how to do anti-aging
  4. Fundamental research on how to think about the effects of anti-aging
  5. Intervention-level research on how to think about mitigating/improving the effects of anti-aging

I lightly investigate each of them below.

[A commentator has pointed out that this division/typology is rather conceptually confused, and it might be better divided in the following way:

  • Vague meta-stuff
  • Anti-aging research
  • Work to mitigate unintended negative consequences of anti-aging research (ideally without losing the positives)

I mostly agree with this critique.]


Fundamental research on what to think about with regard to anti-aging

This is work like my document, where people think about how to think about cause prioritization in anti-aging. E.g., does my typology above make sense? What other ways can you divide the intersection of EA considerations on anti-aging? Who should we talk to, and what philosophies should we try to learn more about?


Pros:

  1. This is an area I and many others are confused about, so it’s likely we’re not asking the right questions
  2. My ontology above is rushed, surely there’s a better one?


Cons:

  1. Feedback loops for this type of meta-meta research are poor/nonexistent
  2. Unclear that this can absorb $1 million on the margin
  3. May look unimpressive to outsiders like future collaborators or funders

Fundamental research on known questions in anti-aging

This mostly comes in two categories: philosophy and biology.


Philosophy

We should also consider questions like “Is further investigation into anti-aging worth funding, compared to dollars spent in other cause areas?” Under which philosophical axioms is this true?

I suspect issues here are similar to the above.


Biology

Here we try to do fundamental research on a deep level into how aging works.


Pros:

  1. Biology can always use more funding
  2. Fundamental research is often under-funded
  3. Good funding in this space can have okay feedback loops (eg sponsor solid research papers that get credit).


Cons:

  1. Relatively crowded space
  2. Even if more money can help a little, unclear if this is funding constrained
  3. May be hard to improve things without knowing more experts

Intervention research on how to do anti-aging

This is trying to execute on the knowledge of aging that we already have.


Concretely, from a cause prioritization perspective, this is investigating different anti-aging groups and interventions (reading papers, meta-analyses, etc.), and coming up with a value of information framework that ranks which interventions are most valuable to fund.
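To make the kind of ranking described above concrete, here is a minimal sketch in Python. All intervention names, probabilities, impacts, and costs below are hypothetical placeholders, not real estimates; a full framework would also model value of information (how much funding a given project changes later decisions) rather than just naive expected value per dollar.

```python
# Toy sketch of ranking anti-aging interventions by expected value per dollar.
# All entries are hypothetical illustrations, not real estimates.

def expected_value(p_success, impact_if_success, cost):
    """Naive expected value per dollar: probability-weighted impact over cost."""
    return p_success * impact_if_success / cost

# Hypothetical interventions: (name, P(success), impact if successful, cost in $)
interventions = [
    ("senolytics trial", 0.10, 5_000_000, 1_000_000),
    ("basic biomarker research", 0.30, 1_000_000, 500_000),
    ("advocacy for aging as a disease category", 0.05, 30_000_000, 2_000_000),
]

# Rank from highest to lowest expected value per dollar
ranked = sorted(
    interventions,
    key=lambda x: expected_value(x[1], x[2], x[3]),
    reverse=True,
)

for name, p, impact, cost in ranked:
    print(f"{name}: EV per $ = {expected_value(p, impact, cost):.2f}")
```

Even this toy version makes one point from the text concrete: the ranking is extremely sensitive to the probability and impact inputs, which is exactly where the hard research work lies.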



Pros:

  1. Has the best feedback loops in this space
  2. Potentially lots of learning value
  3. If anti-aging becomes a big deal, EAs can grab a greater share of the anti-aging resource pie if we are one of the first groups that has clear research on how to prioritize different anti-aging interventions
  4. Can in general be somewhat self-sustainable on non-EA funds if we do a good job


Cons:

  1. I suspect this has low direct value (because the direct value of slightly improving anti-aging funding is low).
  2. Wisely allocating scientific resources is hard.
  3. May be hard to get groups here (which are more likely to be gov’ts or large non-EA philanthropic donors) to listen to us; if they do, we may have to “sell out” some of our core values.

Fundamental research on how to think about the effects of anti-aging

This is reasoning through the considerations that Will and Matthew raised in greater detail. What are the indirect effects of anti-aging? How can we assign probabilities and expected values to them? 

We can presumably borrow from existing social science (psychology, economics, etc.) literature and repurpose it.

This will look more like building a world model than like cause prioritization.


Pros:

  1. Clearly has the potential to be very important
  2. Can benefit from a lot of existing literature
  3. Can absorb EA funding
  4. Good work in this will hopefully be relatively easy to understand, and help us find funders/collaborators
  5. Upstream of a bunch of important considerations like whether we should fund direct anti-aging research, or fund interventions of effects of anti-aging


Cons:

  1. Fairly bad feedback loops (can’t know if we’re right until it’s too late)
  2. We may not have the right ontology, etc, to reason through this clearly
  3. Unclear how to think about timelines -- may not be important to prepare now if anti-aging is 200 years+ out.
  4. May already be superseded by existing work?

[EDIT TO ADD 2021/1/12: Somebody asked me if I had specific existing work in mind; I don’t right now, and am a bit confused why Linch-Aug-2020 wrote point #4 specifically. My best guess is that he was worried an extensive report already existed then that he was unaware of, and was doing ass-covering for such an embarrassing possibility]

Intervention-level research on how to think about mitigating/improving the effects of anti-aging 


Pros:

  1. Clearly has the potential to be very important
  2. A good target -- keeps our “eyes on the prize”
    1. In a sense, this is the only thing that really matters
  3. A relatively clear story of impact


Cons:

  1. Hard to know how to mitigate/improve the effects of anti-aging if we don’t have a clear idea what the effects are
  2. Unclear how to think about probabilities
  3. Unclear how to think about timelines -- may not be important to prepare now if anti-aging is 200 years+ out.


Other

An important fundamental question I did not cover here is the timing of aging relative to other futuristic technologies, for example AI timelines. There’s a sense in which, if superhuman AGI is 30 years out, anti-aging (or any moderately long-termist cause that takes >30 years to pay off) is not very useful, if we expect normal human brains to no longer be at the helm of things.

There are similar worries for the timelines of large-scale genetic cognitive enhancement, digital emulations, human extinction, and civilizational collapse. (If any of the above happens before longevity is noticeably enhanced, any research, direct or indirect, on aging is unlikely to have much effect).

Tentative conclusions

Overall, I am most optimistic about Fundamental research on how to think about the effects of anti-aging, because it seems moderately important and moderately tractable, I have a somewhat clear inside-view picture of the path forwards, and I think good work here is better at early field-building than some of the other options. I am secondarily optimistic about Fundamental research on what to think about with regard to anti-aging, as I think my own internal picture (and likely that of many other EAs) of the important considerations with regard to anti-aging is quite hazy, and can be improved by more thinking. However, I will not prioritize it over my first choice, since thinking about the effects of anti-aging is a bit more object-level, and it is somewhat through relatively object-level considerations that you come to understand which meta-level considerations are important.

I am not too optimistic about doing research that helps accelerate anti-aging progress (Options #2 and #3) until we have a better picture of what the indirect effects of anti-aging will be. I am likewise not too optimistic about Intervention-level research on how to think about mitigating/improving the effects of anti-aging because it is again hard to think about the right interventions until we build a better world model of what anti-aging will look like.

Future work

  1. The explicit and implicit claims in the article should probably be fact-checked by someone who knows more about anti-aging than I do
  2. The ontology/organizational system should be improved a bunch
    1. There’s a reasonably high chance I messed up something obvious
  3. My “Other” section hides a lot of detail on how to prioritize longevity relative to other questions; I’d like people to look into it a bunch.


Acknowledgments to Rethink Priorities for giving me a chance to write this for the work trial (and for hiring me), to Peter Hurford for quickly approving the public writeup, to Will Bradshaw and Matthew Barnett for having good anti-aging thoughts that I shamelessly stole from, and again to Will Bradshaw, Matthew Barnett and Sydney von Arx for reviewing this writeup. 

All mistakes and inaccuracies are, of course, their fault. Please feel free to comment if/when you identify mistakes, so I know who to yell at.


How I researched this

I knew very little about anti-aging beforehand. Some of my thoughts here are due to a single conversation with Will Bradshaw, an EA who has a PhD in the biology of aging and has thought a bunch about its long-term effects. For this article specifically, I read the following posts:

  • Matthew Barnett: Effects of anti-aging research on the long-term future
  • Will Bradshaw’s comment
  • Anders Sandberg on 80000 Hours
  • Metaculus questions and comments
  • Some light Google searches

Other cause areas considered

To pick an interesting cause in which to do within-cause prioritization, I briefly considered the following cause areas.

  1. AI Safety.
  2. Biosecurity
  3. Pandemic Response
  4. Farm Animal Welfare
  5. Wild animal welfare
  6. Life extension
  7. Forecasting
  8. Outreach
  9. Civilizational stability
  10. Cause prioritization
  11. Non-existential GCRs

I rejected Farm and Wild Animal Welfare because I assumed that Rethink had already considered them, and I’d deliver little added value. I rejected AI Safety because I felt there was sufficient work in the EA space already that it’s unlikely I could add much value without significantly more research.

I rejected forecasting because I didn’t have a strong sense of what “crucial considerations” would even mean for forecasting, plus I didn’t think I could condense all my thoughts on it clearly in enough time. I was unsure about what the crucial considerations are in biosecurity, though I’m still optimistic about a light review on this topic for Pandemic Response (I currently suspect a lot of the crucial considerations concern how much improved pandemic response is expected to help/hurt existential biosecurity).
