richard_ngo

AI safety research engineer at DeepMind (all opinions my own, not theirs). I'm from New Zealand and now based in London; I also did my undergrad and master's degrees in the UK (in Computer Science, Philosophy, and Machine Learning). Blog: thinkingcomplete.blogspot.com

Comments

The case of the missing cause prioritisation research

Thanks for making this post, I think this sort of discussion is very important.

"It seems to me (predictably given the introduction) that far and away the most valuable thing EA has done is the development of and promotion of cause prioritisation as a concept."

I disagree with this. Here's an alternative framing:

  • EA's big ethical ideas are 1) reviving strong, active, personal moral duties, 2) longtermism, 3) some practical implications of welfarism that academic philosophy has largely overlooked (e.g. the moral importance of wild animal suffering, mental health, simulated consciousnesses, etc).
  • I don't think EA has had many big empirical ideas (by which I mean ideas about how the world works, not just ideas involving experimentation and observation). We've adopted some views about AI from rationalists (imo without building on them much so far, although that's changing), some views about futurism from transhumanists, and some views about global development from economists. Of course there's a lot of people in those groups who are also EAs, but it doesn't feel like many of these ideas have been developed "under the banner of EA".

When I think about successes of "traditional" cause prioritisation within EA, I mostly think of things in the former category, e.g. the things I listed above as "practical implications of welfarism". But I think that longtermism in some sense screens off this type of cause prioritisation. For longtermists, surprising applications of ethical principles aren't as valuable, because by default we shouldn't expect them to influence humanity's trajectory, and because we're mainly using a maxipok strategy.

Instead, from a longtermist perspective, I expect that the biggest breakthroughs in cause prioritisation will come from understanding the future better, and from identifying levers of large-scale influence that others aren't already fighting over. AI safety would be the canonical example; the post on reducing the influence of malevolent actors is another good example. However, we should expect this to be significantly harder than the types of cause prioritisation I discussed above. Finding new ways to be altruistic is very neglected, but lots of people want to understand and control the future of the world, and it's not clear how distinct doing this selfishly is from doing this altruistically. Also, futurism is really hard.

So I think a sufficient solution to the case of the missing cause prioritisation research is that more EAs are longtermists than before, and longtermist cause prioritisation is much harder than other kinds of cause prioritisation and plays less to EA's strengths. Although I do think it's possible, and I plan to put up a post on this soon.

EA reading list: population ethics, infinite ethics, anthropic ethics

I don't think variable populations are a defining feature of population ethics - do you have a source for that? They're certainly a feature of the repugnant conclusion, but there are plenty of other relevant topics in the field. For example, one question discussed in population ethics is when a more equal population with lower total welfare is better than a less equal population with higher total welfare - an example that motivates differences between utilitarian and egalitarian views. So more generally, I'd say that population ethics is the study of how to compare the moral value of different populations.

EA reading list: population ethics, infinite ethics, anthropic ethics

To me they seem basically the same topic: infinite ethics is the subset of population ethics dealing with infinitely large populations. Do you disagree with this characterisation?

More generally, this reading list isn't strictly separated by topic, but sometimes throws together different (thematically related) topics where the reading list for each topic is too small to warrant a separate page. E.g. that's why I've put global development and mental health on the same page (this may change if I get lots of good suggestions for content on either or both).

Objections to Value-Alignment between Effective Altruists

Yepp, that all makes sense to me. Another thing we can do, that's distinct from changing the overall level of respect, is changing the norms around showing respect. For example, whenever people bring up the fact that person X believes Y, we could encourage them to instead say that person X believes Y because of Z, which makes the appeal to authority easier to argue against.

Objections to Value-Alignment between Effective Altruists

The thing I agree with most is the idea that EA is too insular, and that we focus on value alignment too much (compared with excellence). More generally, networking with people outside EA has positive externalities (engaging more people with the movement) whereas networking with people inside EA is more likely to help you personally (since that allows you to get more of EA's resources). So the former is likely undervalued.

I think the "revered for their intellect" thing is evidence of a genuine problem in EA, namely that we pay more attention to intelligence than we should, compared with achievements. However, the mere fact of having very highly-respected individuals doesn't seem unusual; e.g. in other fields that I've been in (machine learning, philosophy) pioneers are treated with awe, and there are plenty of memes about them.

"Members write articles about him in apparent awe and possibly jest"

Definitely jest.

Systemic change, global poverty eradication, and a career plan rethink: am I right?

Point taken, but I think the correlation within China is so much larger than the correlation between African countries (with respect to the things we're most interested in, like the effects of policies) that it's reasonable to look at the data with China excluded when trying to find a long-term trend in the global economy.
