by Matt Wage, originally on the 80,000 Hours Blog

The haste consideration: resources for improving the world are vastly more valuable if you have those resources sooner.

I’ll first explain one way to see that the haste consideration is true, and then I’ll talk about one important implication of this consideration.

People who dedicate a large part of their lives to strategically doing as much good as possible - i.e., effective altruists - can accomplish vastly more good than most people do. Unfortunately, not many people are effective altruists.

One way to try to improve the world would be to convince more people to be effective altruists. If you spent all of your efforts doing this, how long do you think it would take to convince one person who is at least as effective as you are at improving the world? I think most people, if they're strategic about it, could do it in less than two years.

Now imagine two worlds:

(1) You don’t do anything altruistic for the next two years and then you spend the rest of your life after that improving the world as much as you can.

(2) You spend the next two years influencing people to become effective altruists and convince one person who is at least as effective as you are at improving the world. (And assume that this person wouldn't have done anything altruistic otherwise.) You do nothing altruistic after those two years, but the person you convinced does at least as much good as you did in (1).

By stipulation, world (2) is improved at least as much as world (1) is because, in (2), the person you convinced does at least as much good as you did in (1).

Many people object to this. They think, “It’s possible that world (1) could be improved more than world (2) is. For example, world (1) would be better if, in that world, you convinced 10 people to be effective altruists who are each at least as good as you.” This is a natural thought, but remember that we are assuming that the person you convince in (2) is “at least as good as you are at improving the world”. This implies that if you would convince 10 people in world (1), then the person you convinced in world (2) will do something at least as good as that. It’s true by definition that world (2) is improved at least as much as world (1) is.

There are two lessons we can take away from this. The first lesson is that influencing people to become effective altruists is a pretty high-value strategy for improving the world. For any altruistic activity you’re doing, it might be useful to ask yourself, “Do I really think this will improve the world more than influencing others to become effective altruists would?”

The second lesson is that you can do more good with time in the present than you can with time in the future. If you spend the next two years doing something at least as good as influencing people to become effective altruists, then these two years will plausibly be more valuable than all of the rest of your life. In particular, these two years will be more valuable than any two-year period in the future. This is one way to see that the haste consideration is true.
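To make the compounding behind this concrete, here is a toy model (the uniform recruitment rate is an illustrative assumption of mine, not a claim from the argument above): suppose every effective altruist, including each person you convince, recruits one more effective altruist every d years.

```latex
% Toy model of compounding recruitment. Assumption (mine, for illustration):
% every effective altruist recruits exactly one more every d years, so the
% number of altruists traceable to your effort doubles every d years:
\[
  N(t) = 2^{t/d}.
\]
% Starting \Delta years sooner therefore multiplies eventual impact by
\[
  \frac{N(t+\Delta)}{N(t)} = 2^{\Delta/d}.
\]
% Example: with d = 2 (the "less than two years" estimate above) and a
% head start of \Delta = 2 years, the multiplier is 2^{2/2} = 2; on this
% toy model, effort deployed two years earlier does roughly twice the good.
```

This is only a sketch: real recruitment rates vary and eventually saturate, but any model in which good done now seeds more good later yields the same qualitative conclusion.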

One implication of the haste consideration: It’s plausible that how you spend the next few years of your life is more important than how you spend your life after that. For this reason, when choosing a career, you should pay special attention to how each career would require you to spend the next few years. For example, if a career would require you to spend the next few years studying in school and doing nothing altruistic, then this is a major cost of that career.


Part of Introduction to Effective Altruism 


Comments (2)



So am I right to think that the point of the haste consideration is that there is a "return on investment" on doing good? Many good things you do will in turn cause other good things to happen, so good things done further in the past will have had more time to do good through indirect effects. If so, it would seem really important to think about which good things actually have these indirect effects.

Good point. The person in world (2) is, as you say, doing two things:

a) They start doing altruistic things right away.

b) They focus on convincing others to join the EA movement, rather than on doing object-level altruistic work.

a) and b) are independent in the sense that you can do a) without doing b), and vice versa. However, the combination of doing both a) and b) is potentially quite powerful, as you point out.

An obvious (minor) caveat, however, is that successful object-level altruistic work is probably necessary in order to attract people to the EA movement. You need something to show them, as it were. Hence, all effective altruists devoting all of their time to recruiting new effective altruists is probably not the most efficient way of recruiting new effective altruists. That said, I agree with the general point that effective altruists probably should spend more time convincing others to join the EA movement.
