Tom_Davidson

Comments

The ITN framework, cost-effectiveness, and cause prioritisation

Thanks for this Halstead - thoughtful article.

I have one push-back, and one question about your preferred process for applying the ITN framework.

1. After explaining the 80K formalisation of ITN, you say:

Thus, once we have information on importance, tractability and neglectedness (thus defined), then we can produce an estimate of marginal cost-effectiveness.
The problem with this is: if we can do this, then why would we calculate these three terms separately in the first place?

I think the answer is that in some contexts it's easier to calculate each term separately and then combine them in a later step than to calculate the cost-effectiveness directly. It's also easier to sanity-check that each term looks sensible on its own, as our intuitions are often more reliable for the separate terms than for the marginal cost-effectiveness.
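
To make "combine them in a later step" concrete, here is how I understand the 80K factorisation to multiply out (my own notation, not a quote from your article): each term is a ratio, and the intermediate quantities cancel to leave marginal cost-effectiveness.

$$
\underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{importance}}
\;\times\;
\underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{tractability}}
\;\times\;
\underbrace{\frac{\%\ \text{increase in resources}}{\text{extra \$ spent}}}_{\text{neglectedness}}
\;=\;
\frac{\text{good done}}{\text{extra \$ spent}}
$$

On this reading, the three separate estimates I list below slot directly into that product.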

Take technical AI safety research as an example. I'd have trouble directly estimating "How much good would we do by spending $1000 in this area?", or sanity-checking the result. I'd also have trouble with "What % of this problem would we solve by spending another $100?" (your preferred definition of tractability). I'd feel at least somewhat more confident making and eyeballing estimates for:

  • "How good would it be to solve technical AI safety?"
  • "How much of the problem would we solve by doubling the amount of money/researchers in this area (or increasing it by 10%)?"
  • "How much is being spent in the area?"

I do think the tractability estimate is the hardest to construct and assess in this case, but I think it's better than the alternatives. And if we assume diminishing marginal returns (DMR), we can make the tractability estimate easier by replacing it with "How many resources would be needed to completely solve this problem?"

So I think the 80K formalisation is useful in at least some contexts, e.g. AI safety.


2. In the alternative ITN framework of the Founders Pledge, neglectedness is just one input to tractability. But then you score each cause on (i) the ratio importance/neglectedness, and (ii) all the factors bearing on tractability except neglectedness. To me, it feels like (ii) would be quite hard to score, as you have to pretend you don't know things that you do know (neglectedness).

Wouldn't it be easier to simply score each cause on importance and tractability, using neglectedness as one input to the tractability score? This has the added benefit of not assuming diminishing marginal returns, as you can weight neglectedness less strongly when you don't think there are DMR.
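
A toy way to picture the alternative I'm suggesting (my own notation and parameter, not anything from the Founders Pledge report): fold neglectedness into the tractability score with an adjustable exponent, and set that exponent by how strongly you expect returns to diminish.

$$
\text{score}(C) \;=\; \text{importance}(C) \times \underbrace{\text{tractability}_0(C) \times \text{neglectedness}(C)^{\alpha}}_{\text{tractability score}}
$$

Here $\text{tractability}_0$ is the neglectedness-blind part of the tractability assessment, and $\alpha \in [0,1]$ reflects how strong you think DMR are in that cause ($\alpha \approx 1$ for strongly diminishing returns, $\alpha \approx 0$ when marginal returns look roughly constant, so crowdedness matters little).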

Am I an Effective Altruist for moral reasons?

I found Nakul's article very interesting too, but am surprised at what it led you to conclude.

I didn't think the article was challenging the claim that doing paradigmatic EA activities was moral. I thought Nakul was suggesting that doing them wasn't obligatory, and that the consequentialist reasons for doing them could be overridden by an individual's projects, duties and passions. He was pushing against the idea that EA can demand that everyone support it.

It seems like your personal projects would lead you to do EA activities. So I'm surprised you judge EA activities to be less moral than the alternatives. Which activities, and why?

I would have expected you to conclude something like "Doing EA activities isn't morally required of everyone; for some people it isn't the right thing to do; but for me it absolutely is the right thing to do".

Against segregating EAs

Yeah good point.

If people choose a job which they enjoy less, then that's a huge sacrifice, and should be applauded.

Against segregating EAs

But EA is about doing the most good that you can.

So anyone who is doing the most good that they could possibly do is being an amazing EA. Someone on £1 million who donates £50K is not doing anywhere near as much good as they could do.

The rich especially should be encouraged to make big sacrifices, as they do have the power to do the most good.

The big problem with how we do outreach

I agree completely that talking with people about values is the right way to go. Also, I don't think we need to try to convince them to be utilitarians or near-utilitarians. Stressing that all people are equal and pointing to the terrible injustice of the current situation is already powerful, and those ideas aren't distinctively utilitarian.

Population ethics: In favour of total utilitarianism over average

There is no a priori reason to think that the efficacy of charitable giving should have any relation whatsoever to utilitarianism. Yet it occupies a huge part of the movement.

I think the argument is that, a priori, utilitarians think we should give effectively. Further, given the facts as they stand (namely that effective donations can do an astronomical amount of good), there are incredibly strong moral reasons for utilitarians to promote effective giving and thus to participate in the EA movement.

I think that [the obsession with utilitarianism] is regretful... because it stifles the kind of diversity which is necessary to create a genuinely ecumenical movement.

I do find discussions like this a little embarrassing, but then again they are interesting to members of the EA community, and this is an inward-facing page. Nonetheless, I do share your fears about it putting outsiders off.

Are GiveWell Top Charities Too Speculative?

Those seem like really high flow-through effects to me! £2000 saves one life, but you could easily see it doing as much good as saving 600!

How are you arriving at that figure? The argument that "if you value all times equally, the flow-through effects are 99.99...% of the impact" would actually seem to show that they dominate the immediate effects much more than this. (I'm hoping there's a reason why this observation is very misleading.) So what informal argument are you using?
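
To make the arithmetic behind my question explicit (taking the quoted figure to be, say, 99.99%, purely for illustration): if flow-through effects are a fraction $f = 0.9999$ of the total impact, then

$$
\text{total impact} \;=\; \frac{\text{immediate impact}}{1 - f} \;=\; 10{,}000 \times \text{immediate impact},
$$

which would make the £2000 that saves one life worth something like 10,000 lives of good, not ~600.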

Are GiveWell Top Charities Too Speculative?

This is a nice idea but I worry it won't work.

Even with healthy moral uncertainty, I think we should attach very little weight to moral theories that give future people's utility negligible moral weight. The kinds of reasons that suggest we can give future people somewhat less weight don't go any way towards suggesting that we can ignore them. To do that, a theory would have to show that future people's moral weight falls off (more than!) inversely proportionally with their temporal distance from us. But the reasons such theories give tend to show that we have special obligations to people in our own generation, and say nothing about our obligations to people living in the year 3000 AD vs people living in the year 30,000 AD. [Maybe I'm missing an argument here?!] Thus any plausible moral theory will be such that the calculation is dominated by very long-term effects, and long-term effects will dominate our decision-making process.
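
To spell out the "(more than!)" parenthesis, here is my own back-of-the-envelope version of the point: suppose future people's moral weight at temporal distance $t$ were exactly inversely proportional to $t$, i.e. $w(t) \propto 1/t$. The total weight assigned to the future out to a horizon $T$ would then be

$$
\int_{1}^{T} \frac{dt}{t} \;=\; \ln T,
$$

which still grows without bound as $T$ increases, so far-future effects would still dominate the calculation. To licence ignoring them, the weight would have to fall off strictly faster than $1/t$.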

Impossible EA emotions

Great post!

Out of interest, can you give an example of an "instrumentally rational technique that requires irrationality"?
