
sulodt

3 karma · Joined Sep 2016

Comments (5)

The former; outreach is great. It would probably be better to make your argument in the thread above, to keep the discussion in one place, since I share Ben Todd's opinion and he put it much better than I could. I enjoyed reading your well-thought-out post, by the way!

Because we could work on more effective causes with these resources. See Michael's post.

The difference matters even more in some causes: I would posit that SCI probably does 10,000 to a million times more good than the best arts charity. That means convincing one person to give to SCI is as good as convincing 10,000 arts enthusiasts to donate more effectively within the arts. One of these sounds a lot easier than the other.
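As a back-of-envelope sketch (assuming equal donation sizes and taking the lower, 10,000x end of the range above): writing $u$ for the good done by one donation to the best arts charity,

$$\underbrace{1 \times 10{,}000\,u}_{\text{one donor redirected to SCI}} = \underbrace{10{,}000 \times 1\,u}_{\text{10,000 donors optimising within the arts}}$$

Both sides produce the same total good, but the left requires persuading one person and the right ten thousand.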

Spreading EA thinking within domains is one idea for an intervention within the EA outreach cause. I don't think its good per unit of time invested (i.e. its impact) can compete with already existing EA interventions.

Yes, I should have phrased these things more clearly.

a) The evidence we currently have suggests that the usual EA causes have a vastly higher impact than other causes. That is the entire reason EA works on them: they do the most good per unit of time invested.

Indeed, there might be even better causes, but the most effective way to find them is to look for them as efficiently as possible, which means cause prioritisation research. Spreading EA thinking in other domains doesn't provide nearly as much data.

b) I just meant that we probably won't be 100% sure of anything, but I agree that we could find overwhelming evidence for an incredibly high-impact opportunity. Hence the need for cause prioritisation research.

Imagine chewing gum is an unbelievably effective cause: its life-saving impact is many orders of magnitude higher than walking's. If we want to maximise gum chewing, we cannot afford any distractions, not even small or merely potential ones. Walking has opportunity costs and prevents us from extremely effective gum chewing.

> This piece is about how those resources can be collectively deployed most effectively, which is a different question from "how can I do the most good."

Michael's post still applies. Collective resources are just the sum of many individuals' resources, and every person or group contemplating their marginal impact should ideally factor other EAs' work into their considerations. The opportunity-cost point applies to individuals and groups (or the entire movement) alike.

Any unit of EA resources has opportunity costs, no matter how many people spend it.

> Finally, embracing domain-specific effective altruism diversifies the portfolio of potential impact for effective altruism.

There is no need for a more diverse portfolio. There is no evidence suggesting that there are causes with higher expected value than the ones already being worked on. If anything, the most effective way to improve the EA portfolio is cause prioritisation research, and that is already one of the movement's most impactful causes.

> Even within the EA movement currently, there are disagreements about the highest-potential causes to champion. Indeed, one could argue that domain-specific effective altruist organizations already exist.

People have different values and draw different conclusions from the evidence, but this is hardly an argument for branching out into further causes that, most people agree, show little evidence of high impact.

> Take, for example, Animal Charity Evaluators (ACE) or the Machine Intelligence Research Institute (MIRI), both of which are considered effective altruist organizations by the Centre for Effective Altruism. Animal welfare and the development of “friendly” artificial intelligence are both considered causes of interest for the EA movement. But how should they be evaluated against each other? And more to the point, if it were conclusively determined that friendly AI was the optimal cause to focus on, would ACE and other animal welfare EA charities shut down to avoid diverting attention and resources away from friendly AI? Or vice versa?

If it were conclusively determined (which is unrealistic) that X (in this case AI) is better than Y (in this case animals), then yes, everyone who can switch should do so, since switching would increase their marginal impact.