
The question used to be: “what is the most cost-effective way to help people?”

Tl;dr: There's a lot of money in longtermism, so people seem not to find that a useful question anymore. What's the new animating question that keeps us focused and accountable to the mission?

This is a call to Babble and throw out ideas and see what comes up. 

The full version: 

The question used to be: “what is the most cost-effective way to help people?”

But now there’s all this money, and a stronger orientation to questions with more speculative, less empirical answers, both of which make the original question much less useful.

Now:

  • EA can focus on maximum impact rather than cost-effectiveness (I’ve seen this argument somewhere on the forum but can't find it, and when I tried to write it out I realized I don’t actually understand it. My understanding is that if project X does 10 units of good for $100 but can't be scaled up, and Y can do 100 units of good for $2000, then maybe you want to do Y. But I think if you're not thinking about portfolio expansion or diversification, you fund all of X first and then Y is the most cost-effective thing around? See the worked numbers just after this list.)
  • The argument for hits-based giving gets even stronger: try lots of things, fund lots of projects, be extremely speculative, be willing to try weird things and just fund interesting, smart people to do whatever, because who knows what will work, we don’t have a better idea, and maybe you need to be in extreme Explore mode right now so as not to think too narrowly or optimize too early
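A worked version of the purely hypothetical numbers in the first bullet, to spell out why funding order matters more than "impact vs. cost-effectiveness":

\[
\mathrm{CE}(X) = \frac{10\ \text{units}}{\$100} = 0.10\ \text{units per \$}, \qquad
\mathrm{CE}(Y) = \frac{100\ \text{units}}{\$2000} = 0.05\ \text{units per \$}
\]

With, say, $2,100 to spend, funding in decreasing order of cost-effectiveness means putting $100 into X (10 units) and then $2,000 into Y (100 units), for 110 units total; skipping X entirely only loses 10 units here, and once X is fully funded, Y is the most cost-effective thing left anyway.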
     

As a result:

  • I don’t know what to think when I encounter people spending money in a way that seems silly to me; “cost-effectiveness” feels like it’s been taken off the table as a metric, leaving me thinking that there’s no accountability anywhere, like the epistemic forcing function disappeared when the money bottleneck did.
  • What keeps us morally and epistemically honest? Feedback loops suck as it is. It’s ok that we’re trying to field-build and skill-build and try things that don’t give obvious successes right away, but then what keeps us from being a very earnest group of people who get nothing done? Or from getting a lot done that doesn’t matter, through frenetic effort and wasted motion?
  • And even worse, what keeps us from sucking up tons of talent and interest (because we’re where the interesting ideas and the money are, and everything comes through us), and then we don’t save the world, and the people who could have don’t either, because we made all our mistakes correlate?
     

These are big, thorny questions, and I have some rudimentary answers to some of them, but what I’m looking for here is, what’s the new question?

What replaces “is that the most cost-effective way to help people?” that reflects the position longtermism and some other parts of EA are in, but keeps us accountable and keeps our eyes on the prize?

What should EAs be asking themselves when they reflect on their work, or when they look out at the world and assess? What should come up in conversation? What should people ask themselves when they bounce against EA?

I think I would personally find it helpful, as a north star, to have a new question, to orient and guide and push myself. 

My extremely preliminary ideas + some from Nicole Ross (thank you Nicole!):

  • “What is the most cost-effective way to do good?” might be just fine - I’m still mulling over a longtermist, who finds current human and animal suffering extremely emotionally affecting, telling me about a donation: “I think this money could save four lives and I think you should do this instead.”

    Cost-effectiveness is still real: if we thought we could do better in expectation by throwing all the money at bednets, we should still do that. We aren’t doing that because we think we have a better idea, which is the only reason not to do that.
    • NB: This is really hard to work with for individual cases
  • The deeper original question: “How do you do the most good?” It’s not very actionable, but it still might serve.
  • “What’s my story for how this saves the world / massively improves humanity?” which I like for its built-in call for epistemics and iteration and telling the story out loud and noticing its flaws and improving it and figuring out if it’s even true.
    • Related: “Does this seem like the kind of thing that’s part of a story where we win?”
  • “In twenty years, will I be happy I had a policy of spending money this way?” - activating our hindsight-in-advance, premortem-type thinking, but still caring about the overall policy and not getting caught up in each minute decision.
  • How is this spending leading to big x-risk reduction wins in the real world?
  • How might this money otherwise be spent? What are some things that might be better? What are some things that might be roughly equal to this opportunity in terms of doing good?
  • Will this seem like an obviously bad decision in hindsight?
  • Does this pass the red face test? Will you be happy to defend this decision even if the bet doesn't pay off?

Answers

How do we most efficiently allocate limited resources to do the most good?

This has several advantages in the current regime over "what is the most cost-effective way to help people?":

  1. "Cost-effective" can in theory be inclusive of all costs, but in practice the framing points people too quickly to thinking of financial costs, whereas plausibly we should worry about other costs more (most obviously EA human capital, but also plausibly stuff like branding, connections, human capital of adjacent people, etc).
  2. "most cost-effective way" implies a bit of a single-player game, whereas maybe group coordination and total allocation is a more important framing.
  3. "helps people" may run into issues with population ethics, not sure.

But I recognize this is still mostly a refinement of "what is the most cost-effective way to help people?" and maybe it's better to think of a more radically different question, as the prompt and some of the other answers may have suggested.

I would add that we should be trying to increase the pool of resources. This includes broad outreach like Giving What We Can and the 80k podcast, as well as convincing EAs to be more ambitious, direct outreach to very wealthy people, and so on.

 

I would even replace "efficient" with "effective". I think efficiency can also imply cost-effectiveness, whereas effectiveness is a bit broader (which may not always be better) but feels a bit more accurate.

I interpret "what is the most cost-effective way to help people?" and "How do we most efficiently allocate limited resources to do the most good?" as equivalent.

Linch:
I think in practice these questions will point people in different ways, not sure. (For example, I noticed that all the examples your comments mention reference financial costs, which I think is maybe due to suboptimal priming.)

This doesn't exactly answer your question, but it answers your prompt to babble, and substituting "what is the new EA question" with "what have I been thinking about recently that sounds similar?" seems like a nice babble. So: "How do we do the most good" could now be:

  • "How do we get expected value calculations in practice?"
  • "Once we have expected value calculations in place, what is the highest expected value thing to do?"
  • "What do we do in the meantime?"

If we start out trying to optimize money directed to global health, we can sort of get expected value calculations, by doing the hard research that GiveWell does, and hoping that nothing too suboptimal happens if we take their expected life saved numbers literally, or at least if our decision is to fund charities in decreasing order of estimated cost-effectiveness.
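A toy sketch of that "fund in decreasing order of estimated cost-effectiveness" rule, with entirely made-up numbers (not GiveWell's), just to make the decision procedure concrete:

```python
# Toy model (made-up numbers): fund opportunities in decreasing order of
# estimated cost-effectiveness until the budget runs out.
opportunities = [
    {"name": "A", "cost_per_unit_good": 50, "room_for_funding": 2_000},
    {"name": "B", "cost_per_unit_good": 20, "room_for_funding": 500},
    {"name": "C", "cost_per_unit_good": 100, "room_for_funding": 10_000},
]

budget = 3_000
total_good = 0.0

# Most good per dollar first, i.e. lowest cost per unit of good.
for opp in sorted(opportunities, key=lambda o: o["cost_per_unit_good"]):
    grant = min(budget, opp["room_for_funding"])
    total_good += grant / opp["cost_per_unit_good"]
    budget -= grant
    if budget == 0:
        break

print(f"Expected units of good: {total_good:.1f}")  # 70.0 with these numbers
```

The rule itself is trivial once you trust the numbers; the hard part, as below, is that for more speculative cause areas we don't have them.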

But once we accept that we could get higher expected impact by moving to more speculative cause areas, the number of different actions we can take becomes really large, and the way to choose between them becomes much more fuzzy, and relies much more on human judgment.

Like, it's not like we can tell that "going into AI alignment improves all that is of value by 0.0001% (in expectation), whereas going into EA movement building improves all that is of value by 0.003% (in expectation), so you should definitely try both, see if the difference in fit is more than 30x, and if not, choose the second."


As an aside, I think that the central, or meaningful, split in EA right now is not exactly between near- and long-termists. It's rather between looking for, in a sense, more certain promises of impact, and being willing to take more uncertain gambles if we think that they're worth it in expectation. But that seems mostly distinct from inter-temporal preferences.

I mostly believe that being willing to take speculative gambles is the correct thing to do. But this doesn't map neatly to the near/longtermist split. E.g., the work of OPIS on reducing the most intense forms of suffering today (e.g., cluster headaches using psilocybin) feels somewhat out-there in terms of speculativeness, but it is also pretty near-termist. Charity Entrepreneurship, and in particular creating a charity through them, rather than donating, might also be another good example of something which is both speculative and near-term.


So anyways, once we lose the ability to rely on GiveWell, which optimizes global health and development donations, because we just want to generally optimize actions, we can have a few types of answers: 

Both of these are tricky. I'm working on the second. I don't expect total success, and partial success seems like it would still require much more human judgment than GiveWell. 


I don't think that not having EV calculations means that "anything goes because we don't have expected value calculations". For instance, we can do sanity checks to determine that one option generally dominates another, and as we get better models of the world, we can do more such sanity checks.

So one answer to "What keeps us morally and epistemically honest?" can be "evaluations". Evaluations that cannot approach GiveWell in all their glory, but which can still point out missing or invalid parts of a pathway to impact, or options that are dominated by others, and still be really biting.


This feels like it is a somewhat personal answer. It reframes EA as kind of a research project (I'm a researcher), and paints quantitative methods as one possible solution to that research project (I like quantitative methods.) I'm curious to compare it to what others say.

What's on the critical path to doing good?

The reason explicit cost-effectiveness framings should be deemphasized is that money is not the primary bottleneck. Outside of global health, there is enough money that it's difficult to determine the cost-effectiveness bar, because we don't have all the ideas or megaproject infrastructure required to spend cost-effectively. Naturally, the critical path question emphasizes two of the current bottlenecks: time and project management. Time until AGI is now a critical resource, as is time until various biotech capabilities arrive, time until we cure malaria, and so on. Coordination is the other scarce resource, and project management is the correct framing for solving coordination issues in many (but not all) cases.

If your primary goal is reducing x-risk, you could ask "what's on the critical path to saving the world?" which I think is slightly better.

Isn't this saying that existing interventions have low cost-effectiveness, and hence we should invest in creating new projects that could outperform them?

Thomas Kwa:
I think that's missing the point. If we increase cost-effectiveness of all projects by 20%, we'll be doing much less good than if we increase time-effectiveness (speed up all projects) by 20%. While there is a limited amount of it, money is no longer the most important constraint, so it shouldn't hold a special place as the resource we're trying to maximize use of.
Michael_Wiebe:
Right, we should reframe the optimization problem to include both a budget constraint and a time constraint.
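One sketch of what that reframing could look like (notation mine, purely illustrative: \(x_i\) is how much of project \(i\) we do, \(U_i\) its expected utility, \(c_i\) and \(t_i\) its money and time costs, \(B\) and \(T\) the budget and time available):

\[
\max_{x \ge 0} \; \sum_i U_i(x_i)
\quad \text{s.t.} \quad \sum_i c_i x_i \le B, \qquad \sum_i t_i x_i \le T
\]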

Maybe a slight shift from “what is the most cost-effective way to help people?” to “what are cost-effective ways to help people at scale?”. I think we can set a bar of ~8x GiveDirectly and then aim to fund everything that meets that bar. I write about this in "In current EA, scalability matters".

I don't think that longtermism has changed the target: expected marginal utility per dollar, aka cost-effectiveness. The difficulty with applying this to interventions with long-term effects is measuring those effects. I think it's a mistake to throw out the ITN framework; instead, we should try harder to measure effects.

There's also the difficulty of putting everything on the same scale, e.g. animal and human lives, or present and future lives.

How can we best allocate our limited resources to improve the world? Sub-question: Which resources are worth the effort to optimise the allocation of, and which are not, given that we all have limited time, effort and willpower?

I find this framing most helpful. In particular, for young people, the most valuable resource they have is their future labor. Initially, converting this to money and the money to donations was very effective, but now this is often outcompeted by working directly on high-priority paths. But the underlying question remains. And I'd argue we often reach the point where optimising our use of money, as it manifests in frugality and thrift, is not worth the willpower and opportunity costs, given that there's a lot more money than vetting capacity or labor. (Implicit assumption: thrift has costs and is the non-default option. This feels true for me but may not generalise.)

[A quick babble based on your premise]

What are the best bets to take to fill the galaxies with meaningful value?

How can I personally contribute to the project of filling the universe with value, given other actors’ expected work and funding on the project?

What are the best expected-value strategies for influencing highly pivotal (eg galaxy-affecting) lock-in events?

What are the tractable ways of affecting the longterm trajectory of civilisation? Of those, which are the most labour-efficient?

How can we use our life’s work to guide the galaxies to better trajectories?

Themes I notice

  • Thinking in bets feels helpful epistemically, though the lack of feedback loops is annoying
  • The object of attention is something like ‘civilisation’, ‘our lightcone’, or ‘our local galaxies’
  • The key constraint isn’t money, but it’s less obvious what it is (just ‘labour’ or ‘careers’ doesn’t feel quite right)
Comments

In a sense, I'd say the question is still “What is the most cost-effective way to do good?” But if you have a lot of money and you only spend on interventions that are individually cost-effective but cheap, then you're not going to spend most of your money. And that can (depending on further assumptions) reduce your overall cost-effectiveness. It's more important to make sure you make big investments when you have a lot of money, obviously.

Why not allocate your budget based on expected marginal utility per dollar? This automatically accounts for diminishing returns, since such interventions will have diminishing marginal utility per dollar, and won't be allocated the next dollar if there's a better intervention. With this decision rule, you don't need to worry about how big or small your investments are.
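As a sketch of that decision rule (my notation, assuming diminishing returns: \(x_i\) is the amount of money given to intervention \(i\), and \(U_i(x_i)\) its expected utility): give each next dollar to whichever intervention currently has the highest marginal utility per dollar, which at the optimum equalizes marginal returns across everything funded:

\[
\text{next dollar} \;\to\; \arg\max_i \, U_i'(x_i), \qquad \text{so at the optimum } U_i'(x_i) = U_j'(x_j) \text{ for all funded } i, j.
\]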

One problem with this approach: if there are transaction costs to identifying, evaluating, and funding interventions, then you would optimally ignore interventions that are below some scale. (And we can think of scalability as lack of diminishing returns.)

What do you mean by "individually cost-effective"? Are you trying to make a point about synergies between interventions?

Not sure if others feel this way, but for me, the question never really was "what's the most cost-effective way to help people" - it has always been closer to Linch's suggestion, and one of the sub-specifications of that question was the question of cost-effectiveness.

What factors do you see distinguishing the two framings?
