One downside of engaging with the EA community is that social status in the community probably isn't well aligned with impact, so if you consciously or subconsciously start optimising for status, you may be less impactful than you could be otherwise. 

For example, roles outside EA organisations which lead to huge social impact probably won't help much with social status inside the EA community.

Comments (8)

I copied this related Facebook comment by Kerry Vaughan from 6 September 2018 (from this public thread):

> (This post represents my views and not necessarily the views of everyone at CEA) [for whom Kerry worked at the time]

> [...] I think there are some biases in how the community allocates social status which incentivize people to do things that aren’t their comparative advantage.

> If you want to be cool in EA there are a few things you can do: (1) make sure you’re up to date on whatever the current EA consensus is on relevant topics; (2) work on whatever is the Hot New Thing in EA; and (3) have skills in some philosophical or technical area. Because most people care a lot about social acceptance, people will tend to do the things that are socially incentivized.

> This can cause too many EAs to try to become the shape necessary to work on AI-Safety or clean meat or biosecurity even if that’s not their comparative advantage. In the past these dynamics caused people to make themselves fit the shape of earning to give, research, and movement building (or feeling useless because they couldn’t). In the future, it will probably be something else entirely. And this isn’t just something people are doing on their own - at times it’s been actively encouraged by official EA advice.

> The problem is that following the social incentives in EA sometimes encourages people to have less impact instead of more. Following social incentives (1) disincentivizes people from actually evaluating the ideas for themselves and discourages healthy skepticism about whatever the intellectual consensus happens to be. (2) means that EAs are consistently trying to go into poorly-understood, ill-defined areas with poor feedback loops instead of working in established areas where we know how to generate impact or where they have a comparative advantage. (3) means that we tend to value people who do research more than people who do other types of work (e.g. operations, ETG).

> My view is that we should be praising people who’ve thought hard about the relevant issues and happen to have come to different conclusions than other people in EA. We should be praising people who know themselves, know what their skills are, know what they’re motivated to do, and are working on projects that they’re well-suited for. We should be praising people who run events, work a job and donate, or do accounting for an EA org, as well as people who think about abstract philosophy or computer science.

> CEA and others have taken some steps to help address this problem. Last year’s EA Global theme -- Doing Good Together -- was designed to highlight the ideas of comparative advantage, of seeing our individual work in the context of the larger movement and of not becoming a community of 1,000 shitty AI Safety researchers. We worked with 80K to communicate the importance of operations management (https://80000hours.org/articles/operations-management/) and CEA ran a retreat specifically for people interested in ops. We also supported the EA Summit because we felt that it was aiming to address some of these issues.

> Yet, there’s more work to be done. If we want to have a major impact on any cause we need to deploy the resources we have as effectively as possible. That means helping people in the community actually figure out their comparative advantage instead of distorting themselves to fit the Hot New Thing. It also means praising people who have found their comparative advantage whatever that happens to be.

Interesting - do you have any thoughts as to what status within the community is currently aligned with? My recent thought was that we make a mistake by over-emphasizing impact (or success) when it comes to social status, rather than, for instance, "trying your best on a high-EV project regardless of outcome".

I’m going to replace "impact" with "expected impact" in my post, since really I was thinking about expected impact. I agree that tangible outcomes are given more status than work that takes low-probability, high-impact bets.

The four other main ways I can think of in which social status in EA isn’t perfectly aligned with expected impact are:

  1. Working for / with EA orgs gets more social status than working outside EA orgs (I think this is the most significant misalignment)
  2. Longtermist stuff gets more social status than neartermist stuff
  3. Research roles get more social status than ops roles (except for the ops roles right at the top of organisations)
  4. Philosophy gets more social status than technical research

I want to agree with you, but whenever I come up with an example of someone who is high prestige and fits more than three of your four criteria, I can think of someone of roughly equal prestige who fulfils only one or none of them. I've been wondering how to study or test these claims about prestige in the community in a less subjective way (although I don't know how important it would be to actually do this).

Yeah, I don’t think it’s important to pin down exactly how social status might be misaligned with expected impact. We should assume this kind of misalignment will exist by default, because people are irrational, and as long as we recognise this we can mitigate the harmful effects by trying to avoid optimising for social status.

Counterpoint: my casual impression is that status-within-EA is actually quite strongly positively correlated with real-world-impact. The people who are publishing influential books, going on podcasts, publishing research, influencing policy, and changing public consciousness tend to get high status within EA. I can't really think of any EAs who are doing high-impact outreach who don't get commensurate status within EA.

So, I think the EA community is doing a pretty good job of solving the 'status alignment problem', aligning status-within-EA with real-world-impact.

But I guess one could make a distinction between 'real-world-impact' at the level of changing people's minds in the direction of EA insights and values, versus 'real-world-impact' at the level of reducing actual sentient suffering and promoting well-being. The latter might be quite a bit harder to quantify.

Social status in EA shouldn't have intrinsic value, but it sure can have instrumental value. It can facilitate access to EA's considerable connections, influence, and money.

The EA community can be a valuable impact multiplier for your ideas and projects, but access to that multiplier largely depends on your social status within it.

The dynamics of social status will cause some problems in ways unique to the EA community, though in my experience the same is true of any organized group of people. I've never encountered an organized group that doesn't face the general problem of navigating those dynamics at the expense of making progress toward its shared goals. This problem may be universal, given human nature, though how much of an adverse impact it has can be managed in organizations with:

1. A standard set of principles and protocols for resolving those problems as they arise.
2. Practices that incentivize all participants to stick to those processes with fidelity and integrity.
3. An overall culture in the organization that reinforces those norms.

All of that is easier in cohesive organizations with a singular mission, a formal structure, and an institutional framework providing templates for setting it all up. The most obvious example would be a company in a well-regulated industry whose management all share the same goals. Other than private companies, my experience is that this is most true of membership-based non-profit organizations with a well-designed constitution for establishing and enforcing bylaws among their membership. I haven't worked in the public sector, though I'm guessing some other effective altruists could speak to similar experiences in well-run government departments or agencies.

It's much harder to set up such a system in coalitions with looser structures and multiple agendas, which need to devise appropriate protocols and practices from whatever random position they're starting from. I'll use organized religion as an example to clarify the distinction between the two types of social organization. A single sect within a religion, like the Catholic or Presbyterian churches in Christianity, is more like a singular, formal institution. Meanwhile, Christianity as a whole comprises almost two billion people worldwide, to whom it's practically impossible to ascribe a single way of life based on only one interpretation of the Bible.

Massive political parties, like the Democratic and Republican parties in the United States, are another obvious example of organized institutions that lack a single, coherent culture or framework. Social movements are the other most common kind of looser coalition like this, and that includes effective altruism.

It might be appealing to think that unnecessary tension and confrontation could be minimized by a more formal set of enforceable norms. Yet such iron-clad frameworks can threaten to undermine the purpose of any social movement. The stricter the rules imposed on a community, the more its culture is at risk of decaying into a draconian status quo, with the added risk that most community members won't realize it before it's too late. At its most extreme, the internal enforcement of such regressive standards on a movement by its own leadership reduces its community to a cult; the most extreme enforcement of such standards by an external authority looks like a totalitarian state.

This is the framework I use to think more clearly about how to broach problems caused by status dynamics, though unfortunately I don't have any simple solutions to prescribe. The takeaways for effective altruists should be:

1. These problems are unavoidable and universal. No solution can eliminate them or totally prevent them from arising. Any good solution looks like a robust framework for anticipating these problems and competently addressing them as they continually present themselves.

2. As tempting as it may be to institutionalize a rigid and static system to permanently minimize friction caused by status dynamics, that approach works best only for institutions that maintain the necessary functions of society as it currently exists. Social movements (including both political and intellectual movements) are meant to bring about changes in how society functions, so they'll only succeed if the system they devise is more fluid and dynamic.
