This just came out in Current Affairs Magazine.  It is a polemic, pretty hacky, written from a bias in favour of socialism (as a better way of effecting change, at least for currently-alive humans).  It has the usual out-of-context quotes of Ord, MacAskill, and Bostrom, and cites Phil Torres and Timnit Gebru.  One for the files/conversation on how to deal with external criticism. 

But it had a few more substantive points:

  1. the dismissal of AGI x-risk is unhelpful but not surprising, and there is probably little overlap between the alignment crowd and this magazine's readers (I got it through the FT's Alphaville blog, which is really good), so I doubt it's actively harmful.  I think the efforts to push back and make the case are good (though that isn't a consensus; see this post and comments).  FWIW, I tried to write up my reasons for disagreeing with another alignment-skeptical tech commentator.
  2. the Erik Hoel essay is worth a read as it more rigorously examines EA as a philosophy while essentially agreeing with certain recommendations on how to behave in (Hoel's) life.  See also this EAF post though there aren't many comments there atm.
  3. much of the factually-verifiable or changeable criticism of EA/longtermism/AI/etc revolves around the 'white male' critique.  It would be great to have a set of statistics assessing this, if indeed EAs think it is actually an issue.  For instance, I just did the AGI Safety Fundamentals course on both technical and governance tracks, and thought my cohorts were pretty diverse (one leader was a non-white male, the other a white female, and the non-white participant share was 50% in the technical cohort and 20% in the governance cohort).  In the alignment world, female thought-leaders seem well represented (Ajeya Cotra, Katja Grace, Vanessa Kosoy, and Beth Barnes, off the top of my head).
  4. related to (3), I think the 'white male' thing (presumably a hangover from Ord, Bostrom, MacAskill, Russell, Tegmark, and Christian having written all the highest-profile works so far) might ease with time and a little effort.  For instance, one could go around to magnet (pre-undergrad) schools with high POC representation in (say) SF, NY, London, or Paris, pitching AI x-risk as something students might find more obviously interesting and less abstract/contentious than longtermism/EA (engineered pandemics is another possibility).  Obviously an earlier step is to develop a 'curriculum', or just an accessible talk that is politically acceptable in an educational environment, and to do groundwork with Ofsted or its equivalent (the US is more difficult, as regulation is devolved to the state/local level, so there are probably fewer economies of scale).
  5. the ranking of climate change as a second-order problem is understandable (based upon my reading of Ord, MacAskill, or this post), but it isn't a good look given the general public's concern (which is obviously amplified in countries with relatively low income or developmental status, or simply in more exposed geographies).  This 'bad look' might not matter much if EA isn't trying to grow, but it does seem to conflict with the priority (no. 3 in this list) of building EA as a movement: how do you get a broad, large, diverse group of people to care about EA while essentially telling some (substantial?) percentage of them (say in India, or parts of China or South America) that the floods/crop failures/etc. happening in their countries are relatively less important?  Especially if some of those students/people come from less well-off families, and so aren't insulated from the social and economic tensions that result.  Either you will get a) adherents who already hold certain moral views (which might of course be consistent with extreme utilitarianism), or b) movement growth will skew towards places/people that are less exposed to climate change or wealthy enough to deal with it.  Again, it might not matter very much and may be fully justified from a theoretical perspective, but it feels a bit weird in the court of public opinion (which unfortunately is where we live, and where policy actions are partially determined). 


6 comments

(Note that this comment is quick and not super well thought out. I hope to research and think about it more deeply at some point in the future, and maybe write it up in a better form). 

As with many articles critical of EA, this article spends a while arguing against the early EA focus on earning to give:

To that end, I heard an EA-sympathetic graduate student explaining to a law student that she shouldn’t be a public defender, because it would be morally more beneficial for her to work at a large corporate law firm and donate most of her salary to an anti-malaria charity. The argument he made was that if she didn’t become a public defender, someone else would fill the post, but if she didn’t take the position as a Wall Street lawyer, the person who did take it probably wouldn’t donate their income to charity, thus by taking the public defender job instead of the Wall Street job she was essentially murdering the people whose lives she could have saved by donating a Wall Street income to charity.1

...

MacAskill wrote a moral philosophy paper arguing that even if we “suppose that the typical petrochemical company harms others by adding to the overall production of CO2 and thereby speeding up anthropogenic climate change” (a thing we do not need to “suppose”), if working for one would be “more lucrative” than any other career, “thereby enabling [a person] to donate more” then “the fact that she would be working for a company that harms others through producing CO2” wouldn’t be “a reason against her pursuing that career” since it “only makes others worse off if more CO2 is produced as a result of her working in that job than as a result of her replacement working in that job.” (You can of course see here the basic outlines of an EA argument in favor of becoming a concentration camp guard, if doing so was lucrative and someone else would take the job if you didn’t. But MacAskill says that concentration camp guards are “reprehensible” while it is merely “morally controversial” to take jobs like working for the fossil fuel industry, the arms industry, or making money “speculating on wheat, thereby increasing price volatility and disrupting the livelihoods of the global poor.” It remains unclear how one draws the line between “reprehensibly” causing other people’s deaths and merely “controversially” causing them.)4 

It's a little frustrating to me that EA orgs and public figures have basically conceded this argument and tend to shy away from actively defending earning to give as a standard EA path. I think the utilitarian argument that the quoted graduate student was making is basically correct (with the need to properly account for one's career decision marginally impacting salaries in the given field, and for whether one is likely to be a more effective worker than the person one is displacing).  On the flip side, I think the deontological argument that NJR is making doesn't really hold up that well under scrutiny. Current Affairs is a print magazine; printing and mailing thousands of copies of it every month contributes to resource usage and climate change. NJR presumably is okay with this because he thinks that the benefits of educating and informing his readership exceed the harms of his resource usage. In the same way, I think working in a job that produces some negative harms can be okay if the net benefits of donating one's income substantially outweigh those harms. This gets even more stark when you actually think through the human scale of it all. Imagine having to tell ten thousand parents that the reason their kids won't get anti-malaria pills this year is that your working as a stock trader violates the categorical imperative. It sounds absurd, but that's the kind of thing we're talking about here. 

Something that I do think I and NJR would agree on is that it's really screwed up that the world is in this situation to start with. There's something deeply unjust about a random American lawyer getting to decide whether people die from malaria based on their career and donation decisions. But we can't wave a magic wand and change that at the drop of a hat. And choosing to focus only on efforts to create systemic change means not getting lifesaving medicine to a ton of people who need it right now. I wish critics engaged more deeply with those really hard tradeoffs, and that EAs did a better job of articulating them. Just trying to sidestep the conversation about earning to give really undersells the moral challenge and stakes we're dealing with.

One thing that's sad, and perhaps not obvious to people, is that, as I understand it, Nathan Robinson was initially sympathetic to EA (and this played a role in his at-times vocal advocacy for animals). I don't know that there's much to be done about this. I think the course of events was perhaps inevitable, but that's relevant context for other Forum readers who see this.

The discussion on Erik Hoel's piece is here:

https://forum.effectivealtruism.org/posts/PZ6pEaNkzAg62ze69/ea-criticism-contest-why-i-am-not-an-effective-altruist

This is sort of unavoidably going to get into culture war/political territory... but if Nathan J Robinson is the kind of public enemy that EA has, we're doing very well.  NJR is a terrible hack who has little of substance to say and whose place in The Discourse is usually as a laughingstock, even among those who are ideologically close to him.


I'm of the opinion that EAs should pick more public fights anyway, so I see this as a positive development.

also - the clever-sounding title was taken from an obscure academic screed from 9 years ago - https://ssir.org/articles/entry/the_elitist_philanthropy_of_so_called_effective_altruism

This is nitpicky, but I wouldn't call that "an obscure academic screed":

  • It was written by Charity Navigator leadership, who presumably felt threatened or something by GiveWell. So I think it was more of a non-profit turf war than an academic debate.
  • I wasn't around at the time, but I have the impression that it was pretty (in)famous in EA circles back then. Among other things, it prompted a response by Will MacAskill. So it also feels wrong to call it obscure.