If you set out to maximise the welfare of people alive today, treating them all equally, you’ll end up doing some pretty weird things. Who’d have thought “doing the most good” boiled down to handing out cash to poor Kenyan farmers?
When people see effective altruists focused on cash transfers, distributing bed-nets and cheap medicine - and claiming they’re doing the most good - there’s a common reaction:
This looks naive and narrow: sure, these interventions help the immediate beneficiaries, but they hardly look like they’re solving the world’s greatest problems.
It looks like they’ve made the mistakes of ignoring small probabilities of big upsides, focusing only on concrete outcomes, and ignoring the historical record (in which science, technology and better government are some of the main drivers of progress). It also looks like they’ve completely discounted common-sense do-gooding, which is not mainly focused on global health.
Now suppose you care about both the welfare of people today *and* helping people in the future. If you care about the future, you’ll want to make investments in technology and economic growth that will pay off later. You’ll also want to make sure society is in a position to navigate unpredictable future challenges. This will mean better global institutions, smarter leaders, more social science, and so on. And it’s hard to know which of these are most pressing.1
Overall, this menu of global priorities looks much closer to common-sense efforts to make a difference. In this way, long-run focused effective altruism ends up looking more common-sense than efforts just focused on helping present generations.2
Long-run focused effective altruism is often seen as even less common-sense than the short-run focused version. But it doesn’t have to be that way. Long-run focused effective altruism only becomes unintuitive when taken to an extreme and combined with further non-common-sense beliefs, such as the belief that reducing existential risk is the best way to aid the future, and within that, the belief that artificial intelligence is the most pressing existential risk.3
Because long-run focused effective altruism is associated with these further weird positions, it’s often downplayed when speaking to new people, in favor of short-run effective altruism (malaria nets and so on). I propose that it will be better, especially for people who are already engaged with making a difference, to introduce them first to *moderate long-run focused effective altruism* rather than the short-run focused version. It’s more intuitive and reasonable sounding.
The reason this doesn’t happen already, I think, is that people aren’t sure how to explain moderate long-run focused effective altruism - it’s much easier to say “malaria nets” and direct someone to (traditional) GiveWell. But in the last year it has become much easier to explain. When it comes to picking causes, emphasise that effective altruists take a strategic approach. Yes, they consider their personal passions, but they also try to work on causes that are important, tractable and neglected. Explain that the most important causes are the ones that do the most to build a flourishing society now and in the long run. Then give several examples: yes, there’s global health (especially good on tractability), but there’s also global catastrophic risks (good on importance and neglectedness); scientific research, penal reform, and much else. Link them to the Open Philanthropy Project, 80,000 Hours and the Copenhagen Consensus.
In conclusion, short-run effective altruism is often favored as the more intuitive introduction for newcomers, because long-run focused effective altruism is associated with further weird positions. However, a more moderate and uncertain long-run focused effective altruism is actually the most reasonable-sounding position.
* * *
1 Of course, interventions which maximise short-run welfare might *also* happen to be the best way to help the long-run future, but that’s a topic for a different day.
2 It also looks more common-sense because it involves less certainty. It’s very hard to know what the long-run effects of our actions are, so long-run focused effective altruism tends to work with a broader range of causes than the short-run focused version.
3 In fact, even if you believe both of these things, once the low-hanging fruit in friendly AI research etc. is used up, you’ll then focus on common-sense causes like international collaboration.
Moderate long-run EA doesn't look close to having fully formed ideas to me, and therefore it seems to me a strange way to introduce people to EA more generally.
I don't understand this. Is there an appropriate research fund to donate to? Or are we talking about profit-driven capital spending? Or just going into applied science research as part of an otherwise unremarkable career?
Who knows how to make economies grow?
What is a "better" global institution, and is there any EA writing on plans to make any such institutions better? (I don't mean this to come across as entirely critical -- I can imagine someone being a bureaucrat or diplomat at the next WTO round or something. I just haven't seen any concrete ideas floated in this direction. Is there a corner of EA websites that I'm completely oblivious to? A Facebook thread that I missed (quite plausible)?)
I have even less idea of how you plan to make better politicians win elections.
More social science I can at least understand: more policy-relevant knowledge --> hopefully better policy-making.
Underlying some of what you write is, I think, the idea that political lobbying or activism (?) could be highly effective. Or maybe going into the public service to craft policy. And that might well be right, and it would perhaps put this wing of EA, should it develop, comfortably within the sort of common-sense ideas that you say it would. (I say "perhaps" because the most prominent policy idea I see in EA discussions -- I might be biased because I agree with and read a lot of it -- is open borders, which is decidedly not mainstream.)
But overall I just don't see where this hypothetical introduction to EA is going to go, at least until the Open Philanthropy Project has a few years under its belt.
I’d also find it helpful to know the answers to these questions. In particular, to compare like with like, it would be interesting to know how advocates of long-run focused interventions would recommend spending a thousand dollars rather than funding, say, bednet distribution.
This is a key action-relevant question for me and others. I’ve asked quite a few people, but haven’t yet heard an answer that I’ve personally been impressed by. I also haven’t been given many specific charities or interventions, which leaves the argument in the realm of intellectually interesting theory rather than concrete practicality. Of course this isn’t to say that there aren’t any, which is why I ask! (I have made an effort to ask quite a few far-future focused people though.)
(I know some people advocate saving your money until a good opportunity comes up. Paul has an interesting discussion of this here.)
I agree, and I'd add that what I see as one of the key ideas of effective altruism, that people should give substantially more than is typical, is harder to get off the ground in this framework. Singer's pond example, for all its flaws, makes the case for giving a lot quite salient, in a way that I don't think general considerations about maximizing the impact of your philanthropy in the long term are going to.
That's true, though you can just present the best short-run thing as a compelling lower bound rather than an all-things-considered answer to what maximizes your impact.
To clarify, I was defining the different forms of EA more along the lines of 'how they evaluate impact', rather than which specific projects they think are best.
Short-run focused EA focuses on evaluating short-run effects. Long-run focused EA also tries to take account of long-run effects.
Extreme long-run EA combines a focus on long-run effects with other unintuitive positions such as a focus on specific xrisks. Moderate long-run EA doesn't.
The point of moderate long-run EA is that it's much less clear which interventions are best by these standards.
I wasn't trying to say that moderate long-run EA should focus on promoting economic growth and building better institutions, just that these are valuable outcomes, and it's pretty unclear that we should prefer malaria nets (which were mainly selected on the basis of short-run immediate impact) to other efforts to do good that are widely pursued by smart altruists outside of the EA community.
A moderate long-run EA could even think that malaria nets are the best thing (at least for money, if not human capital), but they'll be more uncertain and give greater emphasis to the flow-through effects.
Yes, moderate long-run EA is more uncertain and doesn't have "fully formed" answers - but that's the situation we're actually in.
EAs haven't been as substantially involved in science funding, but it's a pretty common target for philanthropy. And many people invest in technology, or pursue careers in technology, in the interests of making the world better. My best guess is that these activities have a significantly larger medium-term humanitarian impact than aid. I think this is a common view amongst intellectuals in the US. We probably all agree that it's not a clear case either way.
The story with social science, political advocacy, etc., is broadly similar to the story with technology, though I think it's less likely to be as good as poverty alleviation (or at least the case is more speculative).
Note that e.g. spending money to influence elections is a pretty common activity; it seems weird to be so skeptical. And while open borders is very speculative, immigration liberalization isn't. I think the prevailing wisdom is that immigration liberalization is good for welfare, and there are many other technocratic policies in the same boat, where you'd expect money to be helpful.
It seems like this comes down to a distinction between effective altruism, meaning altruism which is effective, and EA referring to a narrower group of organizations and ideas. I am more interested in the former, which may account for my different view on this point. The point of the introduction also depends on who you are talking to and why (I mostly talk with people whose main impact on the world is via their choice of research area, rather than charitable donations; maybe that means I'm not the target audience here).
I'm happy to go with your former definition here (I'm dubious about putting the label 'altruism' onto something that's profit-seeking, but "high-impact good things" are to be encouraged regardless). My objection is that I haven't seen anyone make a case that these long-term ideas are cost-effective. e.g.,
Has anyone tried to make this case, discussing the marginal impact of an extra technology worker? We'd agree that as a whole, scientific and technological progress are enormously important, and underpin the poverty-alleviation work that we're comparing these longer-term ideas to. But, e.g., if you go into tech and help create a gadget, and in an alternative world some sort of similar gadget gets released a little bit later, what is your impact?
The answer to that last question might be large in expectation-value terms (there's a small probability of you making a profoundly different sort of transformative gadget), but I'd like to see someone try to plug some numbers in before it becomes the main entry point for Effective Altruism.
When Ben wrote "smarter leaders", I interpreted it as some sort of qualitative change in the politicians we elect -- a dream that would involve changing political party structures so that people good at playing internal power games aren't rewarded, and instead we get a choice of more honest, clever, and dedicated candidates. If, on the other hand, "electing smarter leaders" means donating to your preferred party's or candidate's get-out-the-vote campaign... well, I would like to see the cost-effectiveness estimate.
(Ben might also be referring to EAs going into politics themselves, and... fair enough. I doubt it'll apply to more than a small minority of EAs, but he only spent a small minority of his post writing about it.)
I think this is reasonable, and expectation-value impact estimates should be fairly tractable here, since policy wonks have often done cost-benefit analyses (leaving only the question of how much marginal donated dollars can shift the probability of a policy being enacted).
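The kind of back-of-envelope estimate described here can be sketched in a few lines. All the numbers below are hypothetical placeholders, not real figures, and `expected_value` is just an illustrative helper, not anyone's actual model:

```python
# Hedged sketch of an expectation-value estimate for a policy donation.
# Every number here is a made-up placeholder for illustration only.

def expected_value(donation, benefit_if_enacted, prob_shift_per_dollar):
    """Expected benefit = donation size * (change in enactment
    probability per dollar) * net benefit if the policy passes."""
    return donation * prob_shift_per_dollar * benefit_if_enacted

# Suppose (hypothetically) a cost-benefit analysis values the policy
# at $1bn, and $1m of advocacy shifts its chance of passing by 1%.
ev = expected_value(
    donation=1_000,                  # a $1,000 donation
    benefit_if_enacted=1e9,          # $1bn net benefit (hypothetical)
    prob_shift_per_dollar=0.01/1e6,  # 1% shift per $1m (hypothetical)
)
print(ev)  # roughly $10,000 of expected benefit on these assumptions
```

On these made-up numbers a $1,000 donation buys roughly $10,000 of expected benefit - but the whole case turns on the probability-shift term, which is exactly the quantity identified above as the open question.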
Overall I still feel like these ideas, as EA ideas, are in an embryonic stage since they lack cost-effectiveness guesstimates.
"Long-run focused effective altruism is often seen as even less common-sense than the short-run focused version."
I'd say that it is less 'common sense' as such, in terms of principles, although I agree that taking into account factors like economic/technological growth and the sustainability of civilization might lead to recommending some interventions that are more broadly supported. That would be something of a coincidence, and there may also be very outlandish recommendations.
On the intuitiveness of principles, there are many factors that separately contribute to people's intuitions.
A big part of 'common sense' for many people is focus on their own communities, and so neglect of physically and socially distant foreigners. Things like nuclear disarmament or scientific research will be less counterintuitive to many rich country citizens because of the visible impact on the welfare of their own communities.
A different but related angle is mutualism: cooperating on Prisoner's Dilemma, contributing to public goods in a way that benefits everyone, versus one-sided transfers. The costs of cutting carbon emissions, or boosting scientific research, could be allocated around the world such that everyone wins. For transfers and health aid to the poorest the mechanisms for mutual benefit are weaker and harder to implement (although possible). Immigration with taxes and transfers to approach Pareto-improvement may fit in better with the mutualistic framework.
Intuitions about sustainability over time and high average standards of living are more common than linear concern with population size (both intrinsically, for a given standard of living, and in instrumental import).
In some cases these will tend to coincide with a long-run welfare view, and in other cases with a short-run welfare view.
This is the approach I take when talking to people about effective altruism, based partly on caring more about it, and partly on the observation that it goes over better with some audiences (e.g. academics, with whom I have a lot of contact).
It also seems surprising that people go straight from poverty to existential risk reduction, often without pausing in the middle to appreciate the massive long-term humanitarian impacts of tech progress. (I got here via poverty, then concern about faster tech progress for humanitarian benefits, and then only after a long time became interested in existential risk.) As others have pointed out, I think this is a bit of a historical coincidence.
I think what this post highlights for me is that we (or at least I) would like to see more work done on "donation risk" within the EA community, and how we allocate and make giving investments in causes with potentially large effects but uncertain upsides.
For short run causes, we basically accept almost no donation risk - you must show that your organisation uses all the money effectively towards a cause that actually works and delivers the upside you have promised. Even slight delays or hiccups (like AMF struggling to allocate funds this year) are sufficient to prompt donors to look elsewhere. For extremely long run causes my perception is the exact opposite, but perhaps I should not comment, because I don't support most current work into them.
What about medium-run causes: should we look at donation risk like the short run, or do we treat them more like the extreme long run? For many such causes, a big difference is that an organisation like 80,000 Hours seems only to have a lack of evidence because it is new. If/when it becomes more mature and the people it advises go on to take or not take its advice, a better picture of its effectiveness might emerge. Likewise with projects like the OPP, where we may well get a better picture of the good they are doing over time, so I am quite positive about them in a venture-capitalist sort of way.
Medium-run x-risk and political projects face donation-risk difficulties, though. We can see the upsides are big, but showing that our contribution actually mattered and that we are doing the best job working towards it is very hard.
Something like the efficient frontier in modern portfolio theory comes to mind, where you have a tradeoff between risk and return. We have nowhere near enough data, and don't have a uniform metric of return, as the $/DALY seems too controversial, but I can still dream.
This is only anecdotal evidence, but I and a few others who've tried pitching people on these interventions haven't got that reaction. More broadly, I'm curious as to what your evidence is that this is contrary to "common-sense do-gooding". Many non-EAs I know find it common sense, and say that they knew that bednets were a great giving opportunity, and that they've thought that they should donate to them before I suggested it.
That is the historical record, but it's not obvious that thinking that we should give to global poverty charities ignores it or is refuted by it. You'd need to make an argument that a particular, individual sort of work that was available in the past did the most good, and that this is good evidence that an analogous sort of work (e.g. creating a generic tech start-up) will do more good than spending those resources on deworming or bednets or cash transfers.
More broadly, there is an interpretation of common sense which is cautious, empirical, friendly to global poverty charities, and sceptical about at least some x-risk interventions. But I suspect that it's fruitless to debate whether this interpretation is correct or not. People can mean many different things by the phrase "common sense", and these will often bear a distinct resemblance to what the person thinks themself. (We could try to work out what the average person thinks, or would think after suitable reflection. It's not obvious how much weight we should give to that, but it's certainly worth taking into account to some extent. I suspect that they're open to seeing global poverty charities as the best, and wouldn't be sold on many particular far future interventions, but I really don't know.)
My main evidence is that these things are only supported by a relatively small proportion of other groups that contain some people who care a great deal about making a difference e.g. people involved in international development, social entrepreneurs, tech entrepreneurs who care about impact, the non-profit sector, some academics, people who work at the UN, etc.
Also, it seems clear that existing altruistic communities regard a much wider range of projects as plausibly high impact, and think it's weird to focus on just one narrow area.
I think GWWC would also agree that objections along the lines of "what about the long-run or systemic effects" are some of the most common reactions to pitching AMF etc.
What's the difference between moderate and long term EA? I'm guessing x-risk would be long term and perhaps some kind of research, medium term?
As your second footnote suggested, I think that the short vs long term debate really boils down to a low vs high risk one. Just like people have different tastes for risk level in their financial investments, so too in their philanthropic investments. Long-term goals such as developing technology or reducing x-risk aren't things I think anyone considers unimportant; they're simply unpredictable (high risk), whereas GiveWell-type charities don't fix systemic problems but are more “safe.”
Related to #2 is diversification. People diversify their financial investments because low risk doesn't yield that much whereas high yield is also high risk, so they hold various levels of risk in their portfolio (or even within the same risk level, it's lower risk to have multiple investments, of course). This even makes sense if someone wanted to hold only high-risk (long-term) donations in their philanthropic portfolio: they may feel that putting all their money on, for instance, fighting corruption, is a long shot, so may feel comfort in giving half their donations to promoting morals. In this case, the risk level is still high, but it will give the donor the psychological comfort that he has TWO chances of improving the world, rather than just one!
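The “two chances” reasoning can be made concrete. A minimal sketch with made-up probabilities, assuming the two causes succeed or fail independently and that splitting the donation doesn't change each cause's odds (a strong simplification):

```python
# Hedged sketch: splitting a donation across two independent long shots.
# The 10% success probability is made up for illustration.

def p_at_least_one(p1, p2):
    """Probability that at least one of two independent bets pays off."""
    return 1 - (1 - p1) * (1 - p2)

p = 0.10  # hypothetical chance that a given cause succeeds

single = p                    # all money on one cause
split = p_at_least_one(p, p)  # half on each, per-cause odds assumed unchanged

print(single, round(split, 2))  # prints: 0.1 0.19
```

Note that the expected impact isn't necessarily higher when split; what changes is the chance of going home with nothing, which is the psychological comfort described above.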
You associate long term EA with weirdness, and I agree that the public would see AI, and some other forms of x-risk that way, but there are so many other long-term high impact pursuits that are not weird: research on behavioural economics; designing technologies that help the poor along with their distribution and marketing systems; decreasing corruption including political reform; restructuring our economic and monetary systems to make them more fair and egalitarian; promoting more moral or sustainable lifestyles like veganism; lobbying; green tech; medical tech. I don't think people would find any of these weird.
As someone on the forum stated earlier, people tend to be more motivated to make the world better, rather than deal with sad things like extreme poverty. I think that's probably true. When promoting EA, would it not be ideal to have a little “something for everybody”? I.e. for those with bleeding hearts, immediate measures for helping the global poor; for tech-oriented people, developing high-impact technologies; for “save the world from injustice” types, combating corruption. It's unacceptable to me that someone would reject philanthropy or direct EA altogether because the person teaching her about it was dismissive of her place on the risk-type-duration spectrum. We should be empowering everyone, not trying to get them to conform to a specific form of EA!
See my reply to pappubahry above. The distinction is between (i) short-run EA (ii) moderate long-run EA and (iii) extreme long-run EA, not short vs. medium vs. long. I agree this is confusing, sorry!
Also, I don't think the distinction boils down to high-risk vs. low-risk. It's more about what kinds of evidence you use, and maybe some questions about values too.
I get the general sense that the field of development economics is starting to ask the bigger questions again, often using different techniques than randomization. This is from an interesting blog post I read recently:
This type of criticism has been around for a long time (Rodrik, Blattman), but it seems to be gaining more traction now.
Great point Ben!
A lot of what is now the effective altruism movement was fed by the rationality community formed around Less Wrong, which was already largely concerned with the risks of superintelligent A.I. As its members were also the section of effective altruism most concerned with the far future, concerns about superintelligent A.I. dominated as an example of an effective focus area aside from very short-run charity, like the charities recommended by GiveWell. Frankly, before this point I'm not sure that we had good examples of "moderate long-run effective altruism" aside from those generated in this post. Indeed, the collaborative projects between different organizations you mentioned are doing pioneering research into global prioritization. I believe this may be a sign the movement is growing, not in the sense of having greater numbers and popular appeal, but in terms of what it's learning. Hopefully the latter type of growth will lead to more of the former as what the movement is doing becomes more commonly sensible.
When I started reading this, I thought it would be an essay plugging concerns and interventions for the far future without being upfront about it, so I didn't think I would like it. However, it ended up being different than I thought, and I liked it. It was about a new tack for introducing people to effective altruism, which is one I may try myself.