With all the recent press around Will MacAskill and What We Owe the Future, on top of the winter 2022 buzz around Sam Bankman-Fried and the FTX Future Fund, it really seems like we are entering a new chapter in the Effective Altruism movement.

As someone who has been loosely involved with the community for over a decade, I find that things seem quite different than they did in the past. Of course, the changes happened gradually, but beyond the increase in funding and popularity, EA itself, including its norms, values, and messaging, seems vastly different.

I made the following chart to highlight some of the differences as I perceive them.

| Issue | EA 1.0 (2011-2021) | EA 2.0 (2022- ) |
| --- | --- | --- |
| Primary cause area by public attention | Global Health & Poverty | Existential Risk |
| Promoted cause of choice | Anti-malaria mosquito nets | Artificial Intelligence |
| Theoretical/empirical support | Limited to what can be empirically demonstrated with a high degree of confidence | Focus on theoretical, speculative arguments |
| Focus of concern | Mostly people alive today | Mostly future people |
| Accessibility and palatability of core ideas | Not intuitive, but understandable and seen as praiseworthy | Not intuitive, hard to understand, and seen as problematic |
| Connection with politics | Non-political | Trying to influence politics |
| Perceived level of power and influence | Minimal | High |
| Relationship with money | Money is scarce and frugality is expected | Money is abundant and lavish spending is accepted |
| Most highly involved EAs work | For normal businesses | For EA organizations |
| Career impact of working for an EA org | Requires sacrifice of remuneration and career capital | Equal or greater compensation and career capital than at a non-EA org |
| Most highly involved EAs socialize | With non-EAs | With other EAs |

This chart is intended to be neutral, but obviously the changes themselves are not. With so much evolving, lots of people are going to be thrilled with the direction of EA while others are going to be frustrated.

If we accept the categorization of the changes highlighted in the chart as accurate and significant, what do people in this community think of them?

Are you happy with them? Frustrated by them?

Will they help EA gain more popular support? Will they help EA have more impact?

EDIT: I found these 2021 comments from Ben Todd and Rob Bensinger to be helpful in understanding some of the context to these shifts.



 


I think this is far too binary a way of representing EA.

EA has been a “big tent” and a complex ecosystem for a long time – and that is good!

However, I am disappointed to see it framed as a shift, or as a binary, or to see any particular feature of one part of EA treated as “EA” itself.

  • “EA” isn’t talent constrained, but many causes and projects within EA are (so that on net the community is talent constrained).
  • "EA" isn't flush with talent, but many roles within EA can be very competitive (so that on net it'd often be worth applying for any suitable role, but also most people won't get roles after applying for a while).
  • “EA” isn’t funding constrained, but many causes and projects within EA are (so that on net the community is funding constrained).
  • “EA” isn’t flush with cash, but some projects have easier access to funding than they used to (e.g. GiveWell’s research team, top AI safety researchers, some new high-EV projects within longtermism who have strong founding teams and a solid idea).
  • “EA” isn’t longtermist, but many people, causes and projects within EA are (so that longtermism makes up a ~plurality of job opportunities, a ~plurality of new projects, and a significant segment of funding).
  • "EA" isn't neartermist, but many people, causes and projects within EA are (so that neartermism makes up a ~majority of current funding, ~majority of broad base EAs & individual donors to EA causes).
  • “EA” isn’t vegan, but many people, causes and projects within EA take animal lives into serious consideration (such that a norm within EA has been to default to catering vegan).
  • “EA” isn’t political, but many people, causes and projects take the impact of politics seriously (such that most people treat politics as a meaningful factor in their impact on the world).

Et cetera…

The balance and distribution of these features shift within the community over time, but “EA” is the project of trying to figure out how to do more good and to act on what we find.

What's your evidence for EA being a big tent? Has there been a survey of new EA members and their perceptions? Focus groups? Other qualitative research? I'm curious about the basis of your claims. Thanks much!

Almost anything you can think of (other than organisations that have a very specific focus):

  • On any given day, the top posts on the EA Forum range across various worldviews and cause areas
  • On any given day, the funding flowing through GWWC ranges across various worldviews and cause areas
  • Answers to cause-prioritization questions on the EA Survey
  • A quick straw poll at any broad EA event
  • Topics covered in EA groups and fellowships

The contrary is much harder to prove.

To my eyes, you and the post's author don't really disagree; you just prefer different levels of descriptive precision. So instead of saying 'EA is X', you would prefer saying 'many people in EA are X'. After that sharpening, this still captures pretty much the same idea and sentiment about where EA is going as the post highlights.

And responding to the post with 'this is not precise enough' rather than 'this is the wrong trend' seems to miss the point. Of course, tables and simple before-and-afters are not a format for precise description. But the author presumably uses the format not out of carelessness but because it highlights trends in an easily understandable way. To my eyes, the post is meant to highlight overall trends rather than give a precise description of the community. Semantic precision is always prudent, but the main gist of the post survives the sharpening.

So if the response here is essentially 'yes, people in EA are moving in the direction DMMF describes, just don't say EA is', I'd say the post still basically stands.

Largely yes. That's why I said I'm disappointed with this framing (not just in this post but in other contexts where it's happening).

As an outsider who has followed the EA movement for some time, I find the 180-degree shift from pragmatic, empirically demonstrated philanthropy to speculative, theoretical arguments baffling, and I am surprised that more people have not questioned how the "pivot to longtermism" has come at the expense of EA's empirical foundation. I wonder whether EA 2.0 will last, or whether it would be better for the movement to break into two separate initiatives. The EA movement risks losing its broad appeal and accessibility.

I'm very new to the EA movement, but I wonder how much EA has actually shifted from "neartermism" to longtermism, rather than having always been about both.

I see comments from 10 years ago saying things like

80k members give to a variety of causes. When we surveyed, 34% were intending to give to x-risk, and it seems fairly common for people who start thinking about effective altruism to ultimately think that x-risk mitigation is one of or the most important cause area. [...]

If I save the life of someone in the developing world, almost all the benefit I produce is through compounding effects: I speed up technological progress by a tiny margin, giving us a little bit more time at the end of civilisation, when there are far more people. This benefit dwarfs the benefit to the individual whose life I've saved (as Bostrom argues in the first half of Astronomical Waste). Now, I also increase the amount of animal suffering, because the person whose life I've saved consumes meat, and I speed up development of the country, which means that the country starts factory farming sooner. However, we should expect (or, at least, I expect) factory farming to disappear within the next few centuries, as cheaper and tastier meat substitutes are developed. So the increase in animal suffering doesn't compound in the same way: whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increasing meat consumption ends after only a few centuries (when we move beyond farming).

So let's say the benefit to the person from having their life saved is N. The magnitude of the harm from increasing factory farming might be a bit more than N: maybe -10N. But the benefit from speeding up technological progress is vastly greater than that: 1000N, or something. So it's still a good thing to save someone's life in the developing world. (Though of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.)
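Spelling out the arithmetic in that quoted comment, using its own figures: the net value of saving a life is roughly $N - 10N + 1000N = 991N > 0$, so on the commenter's assumptions the long-run benefit dominates even after granting the animal-welfare harm.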

Of course, it's just a random comment, and the actual views of the community were certainly more complex. But it doesn't seem different from the current views?
Or are you referring to the very early days of EA? (2009 and not 2012)
Or to the fact that now more than ~34% of people in EA are focused on x-risk?
Or did EA use to present itself to outsiders as being about neartermism, while keeping the longtermist stuff more internal?


In practice, it seems that Global Health and Development still gets the most funding, at least from OpenPhil and GWWC. Do you think the balance has shifted mostly in terms of narrative or in terms of actions?

Disclaimer: as mentioned, I'm relatively new; I have not read Doing Good Better or What We Owe the Future.

I first became closely involved/interested in EA in 2013, and I think the "change to longtermism" is overstated. 

Longtermism isn't new. As a newbie, I learned about global catastrophic risk as a core, obvious part of the movement (through posts like this one) and read this book on GCRs — which was frequently recommended at the time, and published more than a decade before The Precipice. 

And near-term work hasn't gone away. Nearly every organization in the global health space that was popular with EA early on still exists, and I'd guess they almost all have more funding than they did in 2013. (And of course, lots of new orgs have popped up.) I know less about animal welfare, but I'd guess that there is much more funding in that space than there was in 2013. EA's rising tide has lifted a lot of boats.

Put another way: If you want to do impact-focused work on global health or animal welfare, I think it's easier to do so now than it was in 2013. The idea that EA has turned its back on these areas just doesn't track for me.

I think a focus on absolute values is misleading here. You're totally right that the absolute value of funding has gone up for all cause areas (see this spreadsheet). However, there's also a pretty clear trend that relative funding towards global health and animal welfare has gone down quite a lot, with global health going from approximately all EA funding in 2012 to 54% in 2022. Similarly, animal welfare, which seemed to peak at 16% in 2019, might be only 5% in 2022.
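To make the absolute-versus-relative point concrete, here's a minimal sketch with made-up round numbers (hypothetical figures chosen only to reproduce shares like those above, not the spreadsheet's actual data):

```python
# Hypothetical round numbers (in $M), purely to illustrate how a cause area's
# funding can rise in absolute terms while its share of the total falls.
funding_2012 = {"global_health": 100, "animal_welfare": 0, "longtermism": 0}
funding_2022 = {"global_health": 540, "animal_welfare": 50, "longtermism": 410}

def shares(funding):
    """Return each cause area's share of total funding."""
    total = sum(funding.values())
    return {cause: amount / total for cause, amount in funding.items()}

print(shares(funding_2012))  # global_health: 1.00 -- roughly all EA funding
print(shares(funding_2022))  # global_health: 0.54 -- far more dollars, half the share
```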

I think this relative shift of attention/priorities is what people are often referring to when they say "EA has changed" etc. 

I agree. It seems obvious that effective altruism has changed in important ways. Yes, some characterisations of this change are exaggerated, but to deny that there's been a change altogether doesn't seem right to me.

It may be more about how much of the conversation space different topics take up than about funding amounts (relative or absolute).

I think even if a larger animal funder had kept the percentages the same, with no change in topics, people would still sense a shift.

Yeah, I agree with that - I guess I was using funding amounts as a proxy for general EA attention, which includes stuff like EA Forum posts, orgs working on an issue, the focus of EA intro materials, etc.

That's an amazing spreadsheet you linked there! Did you collect the data yourself?

I wish! It's so interesting. I found it linked (very surprisingly) in the Time article about Will/longtermism, in this quote:

The expansion has been fueled by a substantial rise in donations. In 2021, EA-aligned foundations distributed more than $600 million in publicly listed grants—roughly quadruple what they gave five years earlier.

GiveWell apparently has different (higher) numbers: https://www.givewell.org/about/impact

Aaron, I agree that these global health issues are getting the serious attention they need now, and I don't think that EA has turned its back on these issues. 

Rather, it's the narrative about EA that feels like it's shifting. The New Yorker's piece describes the era of "bed nets" as over, and while that's not true when you look at the funding, the attention placed on longtermism shifts the EA brand in a big way. One of EA's strengths was that anyone could do it: "here's how you can save one life today." The practical, immediate impact of EA appeals to a lot of young people who want to give back and help others. With longtermism, the ability to be an effective altruist is largely limited to those with the advanced knowledge and technical skills to engage with these highly complex problems.

As press and attention are drawn to this work, it may come to define the EA movement, and, over time, EA may become less accessible to people who would have been drawn to its original mission. As an outsider to EA who works as a PM building AI models, I'm not able to assess which AI alignment charities are the most effective, and that saps my confidence that my donation will be effective.

Again, this is purely a branding/marketing problem, but it could still be an existential risk for the movement. You could imagine a world where these two initiatives build their own brands: longtermism could become 22nd Century Philanthropy (22C!), and people committed to this cause could help build that movement. At the same time, there are millions of people who want to funnel billions of dollars to empirically validated charities that make the world immediately better, and the EA brand serves as a clearly defined entry point into that work.

Over EA's history, the movement has had a porous quality, inviting outsiders and enabling them to rapidly become part of the community by letting them concretely understand and evaluate philanthropic endeavors; in shifting to abstract, difficult-to-understand longtermist issues, EA may lose the quality that drives its growth. In short, the EA movement could be defined as making the greatest impact on the greatest number of people today, and 22nd Century Philanthropy could exist as a movement for impacting the greatest number of people tomorrow, with both movements able to attract people passionate about these different causes.

Shouldn't it be possible to make a simple chart of ballpark funding to longtermist versus neartermist causes over time from EA-aligned orgs?
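A minimal sketch of what that chart could look like, assuming someone compiles per-year grant totals (the numbers below are placeholders, not real data):

```python
import matplotlib.pyplot as plt

# Placeholder figures in $M -- real values would need to be compiled from
# grant databases such as Open Philanthropy's.
years = [2017, 2018, 2019, 2020, 2021]
neartermist = [150, 180, 220, 260, 320]  # e.g. global health, animal welfare
longtermist = [20, 40, 70, 120, 280]     # e.g. x-risk, AI safety

plt.plot(years, neartermist, marker="o", label="Neartermist causes")
plt.plot(years, longtermist, marker="o", label="Longtermist causes")
plt.xlabel("Year")
plt.ylabel("Grants ($M, placeholder values)")
plt.title("Ballpark EA funding by cause cluster (illustrative)")
plt.legend()
plt.show()
```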

These types of questions deserve more focused dialogue and debate. 


(Disclaimer: I'm someone who's been in EA for ~3 years and digs up EA material for the general public, not someone with tons of lived experience in EA.)

On palatability: there are still cases where early EA ideas such as charity evaluation didn't go over well. Public perception never seemed to be straightforward praise (contrast with an org like Partners in Health), and confusion seemed pretty abundant (e.g., common criticisms of "EA" were aimed just at earning to give). Paraphrasing MacFarquhar in Strangers Drowning, optimization and "cold-blooded" charity tend to draw suspicion, so it's not straightforward to say that EA has become less publicly palatable.

This table seems to fit a broad categorization of two clusters in EA that are becoming more distinct better than it fits a time trend. As a college group organizer, the increasing sorting between people focused on empirical evidence and direct intervention and people focused on theoretical projections and inference used to worry me more; now I think it's probably better for group dynamics, organization, and overall impact. It hasn't fragmented the EA campus community (yet) and works more like specialization: there's still sharing of tools, pedagogy, and other helpful resources. (This diffusion may not be reflected in EA careers/EA orgs.) It's still possible these two vague factions fully split, possibly leaving one side with the lion's share of funding and influence, but I don't see either fully disappearing.

In some sense, I'm both happy and frustrated about the change.

I'm happy that EA recognized the importance of longtermism that is robust to distributional shift, and also recognized the importance of AI and AI Alignment. It takes some boldness to accept weird causes like this. In some sense, EA invented practical longtermism that is robust to distributional shift for people, which is equivalent to inventing longtermism for people.

I also worry about politicization of EA, even as I grudgingly admit the fully non-political era is over by default, and EA needs to gracefully recognize the new reality.

I think the reliability of theoretical arguments varies. Most have problems when translated to the real world, but something like DALYs/QALYs is likely to work in real life. A similar point for AI is made in the Risks from Learned Optimization sequence: https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB

https://www.nature.com/ar

I'm also frustrated at how little presence EA has in the entrepreneurship space, and at how much we argue over the best thing to do before doing it. To put it another way, EA needs more executors, and 80,000 Hours should start prioritizing start-up hiring. At the end of the day, EA needs to be able to actually do stuff and get results, not just become an intellectual hothouse.

EDIT: I no longer endorse growing the movement in people because I now think that Eternal September issues where a flood of new EAs permanently change the culture are a real risk, and there aren't a lot of scalable opportunities right now.

Thanks for the post. Is there a comprehensive repository of EA involvement in politics? Thinking of something similar to Open Philanthropy's grant database.
