This just came out in Current Affairs Magazine. It is a polemic, pretty hack-y, written from a bias in favour of socialism (as a better way of effecting change - at least for currently-alive humans). It has the usual out-of-context quotes of Ord, MacAskill, Bostrom, and cites Phil Torres and Timnit Gebru. One for the files/conversation on how to deal with external criticism.
But it had a few more substantive points:
- The dismissal of AGI x-risk is unhelpful but not surprising. There is probably little overlap between the alignment crowd and this magazine's readers (I got it through the FT's Alphaville blog, which is really good), so I doubt it's actively harmful. I think the efforts to push back and make the case are good (though that isn't a consensus; see this post and comments). FWIW, I tried to write up my reasons for disagreeing with another alignment-sceptical tech commentator.
- The Erik Hoel essay is worth a read, as it examines EA as a philosophy more rigorously while essentially agreeing with certain recommendations on how to behave in (Hoel's) life. See also this EAF post, though there aren't many comments there atm.
- Much of the factually verifiable or changeable criticism of EA/longtermism/AI/etc. revolves around the 'white male' critique. It would be great to have a set of statistics assessing this, if indeed EAs think it is actually an issue. For instance, I just did the AGI Safety Fundamentals course on both the technical and governance tracks, and thought my cohorts were pretty diverse: one facilitator was a non-white man and the other a white woman, and the non-white participant share was 50% in the technical cohort and 20% in the governance cohort. In the alignment world, female thought leaders seem well represented (Ajeya Cotra, Katja Grace, Vanessa Kosoy, and Beth Barnes, off the top of my head).
- Related to the previous point, I think the 'white male' perception (presumably a hangover of Ord, Bostrom, MacAskill, Russell, Tegmark, and Christian having written all the highest-profile works so far) might ease with time and a little effort. For instance, outreach to magnet (pre-undergrad) schools with high POC representation in (say) SF, NY, London, or Paris, pitching AI x-risk as something students might find more obviously interesting and less abstract/contentious than longtermism/EA (engineered pandemics is another possibility). Obviously an earlier step is to develop a 'curriculum', or just an accessible talk that is politically acceptable in an educational environment, and to lay groundwork with Ofsted or its equivalent (the US is more difficult, as regulation is devolved to the state/local level, so there are probably fewer economies of scale).
- The ranking of climate change as a second-order problem is understandable (based upon my reading of Ord, MacAskill, or this post), but it isn't a good look given the general public's concern (which is obviously amplified in countries with relatively low income or developmental status, or simply in more exposed geography). This 'bad look' might not matter much if EA isn't trying to grow, but it does seem to conflict with the priority (no. 3 in this list) of building EA as a movement: how do you get a broad, large, diverse group of people to care about EA while essentially telling some (substantial?) percentage of them (say in India, or parts of China or South America) that the floods, crop failures, etc. happening in their countries are relatively less important? Especially if some of those students/people come from less well-off families, and so aren't insulated from the resulting social and economic tensions. Either (a) you will get adherents who already hold certain moral views (which might of course be consistent with extreme utilitarianism), or (b) movement growth will skew towards places and people that are less exposed to climate change, or wealthy enough to deal with it. Again, it might not matter very much, and may be fully justified from a theoretical perspective, but it feels a bit weird in the court of public opinion (which unfortunately is where we live, and where policy actions are partially determined).