Effective Altruism came up in a recent discussion between Patrick Collison and Jason Crawford on the topic of Progress Studies. This is a transcript of the relevant passage, edited for clarity.

Jason Crawford: So compare and contrast the effective altruism movement with progress studies, and I would love to hear you comment on the notion of existential risk or global catastrophic risk.

Patrick Collison: On EA... well, it's a kind of totalizing framework, and I don't mean that pejoratively. I just mean that in principle everyone could have an EA mindset, and I'm sure at least some EA members think that everyone should have an EA-oriented mindset. So you can ask the question: would it be good for everyone to have an EA mindset? But the other way of asking it is: is EA, on the current margin, a good new way for people to be thinking? If zero people were thinking in an EA-oriented fashion before the EA movement, would it be good for 5% or 10% of people to be thinking that way? In the latter sense, as a shift on the margin, I think EA has been great, and I'm delighted at the progress they've had.

Now, if the question is whether everyone should be an EA, or even, in the individual sense, whether I am or think I should be an EA: obviously there's heterogeneity within the field, but my general sense is that the EA movement is very focused on estimation, quantification, and utilitarian calculation. As a practical matter, that means you end up too focused on that which you can measure, which in turn means you're too focused on things that are short-term, with bed nets or deworming being obvious examples. And are those good causes? I would say almost definitely yes. Now, we've seen some new data over the last couple of years suggesting that maybe they're not as good as they initially seemed, but they're still very likely to be really good things to do.

But it's hard for me to see how, you know, writing A Treatise of Human Nature would score really highly in an EA-oriented framework, even though, assessed ex post, that looks like a really valuable thing for Hume to have done. And similarly, as we look at the things that in hindsight seem like very good things to have happened in the world, it's often unclear to me how an EA-oriented intuition might have caused somebody to do them. So I guess I think of EA as a new kind of metal detector, one that's really good at detecting some metals that other detectors are not very good at detecting. But I actually think we need some diversity in the metallic substances our detectors are attuned to, and for me EA would not be the only one.

Jason Crawford: Good metaphor. And what about the particular concerns about global catastrophic risk, or existential risk, especially from technology? This is where I get the most, I won't even say pushback exactly, but concern from people about progress studies: if we just run full throttle with progress, what about the risk that we're just not careful enough and we get some catastrophe?

Patrick Collison: Yeah, I think it's probably true that faster technological change is not monotonically better; at some point there are probably shear forces within society. The practical question is whether our concern should be having too much or having too little. (I should say, I don't want to conflate 'progress' in progress studies with purely technological advancement, but it's still a significant part of it.) Looking historically, I think too little has been a problem far more frequently than too much. Today, in many super obvious ways, too many lives are not as good as they obviously could be, so in a very tangible sense we have too little progress, though I think there are valid ways in which one could imagine having too much.

But I guess, for every one unit of concern I give to 'too much,' I give four or five to 'too little.' Now, there is an asymmetry there, in that existential risks are, by definition, existential, and I thought Toby Ord's book was a great contribution. Broadly, I think the existential risk folks have introduced a good line of thinking that we really should take seriously. But I suspect that it's possible to mitigate most of those risks relatively effectively without redirecting vast swathes of society, and I think the more difficult problem will actually be how we generate enough progress.

Comments

Fun post. Thanks for posting it, and thanks to Patrick and Jason.

"And similarly, as we look at the things that in hindsight seem like very good things to have happened in the world, it's often unclear to me how an EA-oriented intuition might have caused somebody to do them."

I think this point is a good one, but it doesn't hold up. This post has 40 upvotes and no negative comments. Seemingly everyone agrees that it's good for people to follow non-standard paths. This is literally a case of "an EA-oriented intuition" causing somebody to do so.

Does someone want to send Patrick his membership details?

Yeah, it does sound like he might be open to funding EA causes at some point in the future.

I do think, though, that it is still a good criticism. There is a risk that people who would otherwise pursue weird, idiosyncratic, yet impactful projects might be discouraged because such projects are hard to justify within a simple EA framework. One potential downside risk of 80k's work, for example, is that some people might end up being less impactful because they choose the "safe" EA path rather than a more unusual, risky, and, from the EA community's perspective, low-status path.

Let's model it. Right now it seems like a very vague risk; if it's a significant one, it's worth framing in a way that would let us find out whether we're wrong.
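For instance, here's a minimal Fermi-style sketch in Python (every number is a hypothetical placeholder, not a real estimate; the only point is to state the claim in a form we could check):

```python
# Hypothetical Fermi sketch: does nudging people toward "safe" EA paths
# reduce total impact? All numbers are made-up placeholders.

n_people = 100     # people influenced toward the "safe" path (assumed)
p_switch = 0.2     # fraction who would otherwise have tried an idiosyncratic project (assumed)

ev_safe = 1.0      # expected impact of a "safe" EA career, in arbitrary units (assumed)
p_hit = 0.05       # chance an idiosyncratic project is a big success (assumed)
impact_hit = 50.0  # impact of a big success, in the same units (assumed)

ev_risky = p_hit * impact_hit  # expected impact of the risky path = 2.5 units

# Net effect of the nudge: switchers trade the risky EV for the safe EV.
net = n_people * p_switch * (ev_safe - ev_risky)
print(f"EV(risky) = {ev_risky:.1f} units; net effect of the nudge = {net:+.1f} units")
```

With these placeholders the nudge loses 30 units of impact, but the conclusion flips entirely if p_hit or impact_hit is much lower, which is exactly the kind of disagreement a model like this would surface.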

I'd also say things like:

  • EAs do a lot of projects, many of which are outlandish or not obviously impactful; how does this compare to the counterfactual?

"But it's hard for me to see how, you know, writing A Treatise of Human Nature would score really highly in an EA-oriented framework, even though, assessed ex post, that looks like a really valuable thing for Hume to have done."

Actually, there are a lot of EAs researching philosophy and human psychology.

I think Collison's conception of EA is something like "GiveWell charity recommendations", which seems to be a common misunderstanding among non-EA people. I didn't watch the whole interview, but it seems odd that he doesn't address the tension between what he had just said about EA and his comments on x-risks and longtermism.
