
Summary: In this post I explain why I believe the EA community's culture is important, the things I like about it, and the aspects I believe we should take care of.

The Effective Altruism culture

I feel very lucky to be part of a group of people whose objective is to do the most good. I really want us to succeed, because there is so much good yet to be done, and so many ways humankind can achieve extraordinary feats. Being part of this endeavor gives me purpose and something I am willing to fight for.

But Effective Altruism is not only the idea of "doing the most good"; it is also a movement. It is a very young movement indeed: according to Wikipedia, Giving What We Can was founded by Toby Ord and Will MacAskill in 2009. In 2011 they created 80000hours.org and started using the name Effective Altruism, and in 2013, less than ten years ago, the first EA Global took place. Since then, we have done many things together, and I am sure we will achieve many more. For that, I believe the most important aspect of our movement is not how rich we get, or how many people we place at key institutions. Rather, it is the culture we establish, and for this reason I think it is everyone's job in the community to make sure that we remain curious, truth-seeking, committed, and welcoming. Keeping this culture is essential to being able to change our minds about how to do the most good, and to convincing society as a whole about the things we care about.

In this post I will discuss the things I like about us, and also the things we have to pay special attention to.

The things I like about our culture

Some of the things I like about our community we owe to the rationalist community from the Bay Area: the focus on truth-seeking and on having good epistemics about how to do the most good are very powerful tools. Other things I like about the community that are also inherited from the Bay Area are, I believe, the risk-taking and entrepreneurial spirit.

Beyond these, our willingness to consider unconventional but well-grounded stances, the radical empathy to care about those who have no voice (the poorest among us, animals, or future generations), and the principle of cause impartiality are due to the utilitarian roots of Toby Ord and Will MacAskill.

Finally, Effective Altruism has more or less successfully avoided becoming a political ideology, which I believe would be risky.

Aspects where we should be careful

However, not all aspects of our culture are great, even if generalizing is perhaps not appropriate. Instead, I will flag the aspects I believe could become a problem. With this, I hope that the community will pay attention to these issues and keep them in check.

The first one is money. While it is a blessing that so many rich people agree that doing good is great and are willing to donate to it, a recent highly upvoted post warned about the perception and epistemic problems that may arise from that. The bottom line is that having abundant money may be perceived as self-serving, and may degrade our moral judgment; you can read more in the post itself.

A perhaps more important problem is a degree of elitism in the community. While it makes sense that wanting to do good means talking first to students at good universities, we have to be very careful not to be dismissive of people from different backgrounds who are nevertheless excited by the same goals. This may be particularly damaging in subareas such as AI Safety, where there is sometimes a meme that all we need is really, really smart people. It is not even true: what we need is good researchers, engineers, and so on. Something similar applies to students at elite universities: let us not be dismissive of people just because they did not go to prestigious colleges.

Somewhat related is social status in the community. While it is clear that not every opinion in the community should carry the same weight, we have to fight against too much deference about what is best to do. I sometimes fear we give the same answers to every person asking for advice, without understanding that their situations differ. I am sure this is not the case with 80,000 Hours, and that they put real effort into personalized advice, but I am worried about the quick "just go and do AI Safety" advice that I have sometimes encountered. Social status might also be a problem for people aiming to found charities in poverty alleviation: since so many people defer to GiveWell and malaria interventions are so hard to beat, heroic people aiming to found new charities in other high-impact areas might be discouraged from doing so.

There is also a risk of strong in-group vs. out-group social dynamics. If we want people to be happy to become EAs, we need to adapt our speech to them. It is false that we can just throw a bunch of compressed rational arguments at smart people and expect them to instantly recognize that longtermism or existential risks make sense. Even doing things that society sees as awkward, such as caring about animals, is costly, so it is important to be patient and let newcomers know that many of us were in their position at some point and struggled too. I vividly remember how, at the first EA dinner I attended, I said I cared about climate change but not so much about animal welfare, even though it made total sense to me that it is bad when animals suffer. In the following months, however, I went vegetarian, and the reason is that adopting uncommon beliefs takes time, and perhaps a group of like-minded people to support you.

And finally: are we too dismissive of standard ways of solving problems? A couple of examples. It recently surprised me that GiveWell started recommending water purification interventions as highly effective: since water quality interventions are widely known and were not previously recommended, I think most EAs would have assumed they were not really impactful and simply flagged them as ineffective. Similarly, a few people I have talked to are somewhat dismissive of academia as a place to solve AI Alignment because the incentives are bad, but academia has one of the best feedback mechanisms for doing research. For this reason, I believe that until we have figured out a better way to measure quality and progress in AI Safety research, such an attitude is premature.

Please note that I don't think these are problems the community already has in general, but rather issues that may grow into important problems.

In summary, I believe the culture we foster in the community will be very important to preserve our potential to do good, and we have to make sure we remain a friendly, open, and truth-seeking community.


Comments

I do suspect there is a lot of interaction happening between social status, deference, elitism, and what I'm starting to feel is more of a mental health epidemic than a mental health deficit within the EA community. I suspect it's good to talk about these together, as things that go hand in hand.

What do I mean by this interaction?

Things I often hear, which exemplify it:

  • younger EAs, fresh out of uni, following particular career advice from a person / org and investing a lot of faith in it - probably more so than the person of higher status expects them to. When their path doesn't go quite right, they get very burned out and disillusioned
  • people not coming to EA events anymore because, while they want to talk about the ideas and feel inspired to donate, the imposter syndrome becomes too big when they get asked "what do you do for work?"
  • talented people not going for jobs / knocking themselves down because "I'm not as smart as X" or "I don't have 'elite university' credentials", which is a big downer for them and reinforces the whole deference to those with said status, particularly because those people are more likely to be in EA positions of power
    • this is a particularly pernicious one, because ostensibly smarter / more experienced people do exist, and it's hard to tell who is smarter / more experienced without looking to signals of it, and we value truth within the community... but these are also not always the most accurate signals, and moreover the response to the signal (i.e. "I feel less smart than that person") is in fact an input into someone's ability to perform

Call me a charlatan without my objective data, but speaking to group organisers this seems way more pervasive than I previously realised... I would welcome more group organisers / large orgs like CEA surveying this again, building on the 2018/19 work... hence why I am using strong language that might seem almost alarmist.

EDIT: formatting was a mess

I would not put it as strongly. My personal experience is a bit of a mixed bag: the vast majority of people I have talked to are caring and friendly, but I still (rarely) have moments that feel a bit disrespectful. And really, this is the kind of thing that would push new people out of the movement.

This is good and I want to see explicit discussion of it. One framing that I think might be helpful:

It seems like the cause of a lot of the recent "identity crisis" in EA is that we're violating good heuristics. It seems like if you're trying to do the most good, really a lot of the time that means you should be very frugal, and inclusive, and beware the in-group, and stuff like that.

However, it seems like we might live in a really unusual world. If we are in fact massively talent constrained, and the majority of impact comes from really high-powered talent and "EA celebrities", then maybe we are just in one of the worlds where these heuristics lead us astray, despite being good overall.

Ultimately, I think it comes down to: "if we live in a world where inclusiveness leads to the highest impact, I want EA to be inclusive. If we live in a world where elitism leads to the highest impact, I want EA to be elitist". That feels really uncomfortable to say, which I think is good, but we should be able to overcome discomfort IF we need to.

Hey James!

I think there are degrees, like everywhere: we can focus our community-building efforts on more elite universities without rejecting or being dismissive of people from the community on the basis of potential impact.

Yes, 100% agree. I'm just personally somewhat nervous about community building strategy and the future of EA, so I want to be very careful. I tried to be neutral in my comment because I really don't know how inclusive/exclusive we should be, but I think I might have accidentally framed it in a way that reads implicitly leaning exclusive, probably because I read the original post as implicitly leaning inclusive.
