[epistemic status: articulation of a position I kind of believe and think is under-articulated, but am unsure of the strength of]
I think EA has a lot of great ideas. I wish more people in the world deeply understood them, and took ~EA principles seriously. I'm very into people studying the bodies of knowledge that EA has produced, and finding friends and mentors in the ecosystem.
But I also think that EA is still a tiny corner of the world, and that there are a lot of important networks and bodies of knowledge beyond it. When I think about the optimal allocation of people who are bought into EA, I want quite a lot of them to go out and interact with different systems in the world and different peer groups -- to learn from them and make connections.
In principle this should be pretty accessible. But I worry that our implicit social structures send the message "all the cool people hang around the centrally EA spaces", in a way that doesn't really support people in actually making these exploring moves while staying engaged with and encouraged by EA.
I think that this is one of the most important problems (if not the most important problem) to fix in EA messaging / status-granting.[1] Note that I don't think we want to slow down people coming into the EA bubble -- it's often healthy and good for people to get up to speed on a lot of stuff, which gives them better context for subsequent decisions. So the challenge is to encourage people to graduate to exploring without making exploring itself so high-status that people jump directly there without first learning the cool stuff that EA has to offer.
What could we do about it? Some options:
- Encourage a narrative something like "when your EA learning slows down, that's often the time to dive back into the wider world"
- Celebrate people who follow this trajectory
- Make sure that community support structures are helpful and functional for people who have a lot of EA knowledge but are now exploring rather than "full time EA professionals"
I'd be keen to see fleshed out versions of these, or other ideas.
Absent good fixes here, I'm inclined to celebrate a certain amount of EA disillusionment: it seems important that a fraction of super talented people go and explore different areas, and if that's easier to access given disillusionment with EA, then so much the worse for people's good opinions of EA. But this path seems worse than the alternatives, if something else could work: it generates bad feeling, and it makes it harder for people to leave exploring mode and start working with the core of the community when that's the correct move.
N.B. I'm making a directional claim here. Of course it's quite possible to imagine getting to a stage where too many people go and explore, evaporating the pool of people trying to work on the most crucial things. What would be too much exploration? My guess is that in equilibrium, ideally something between 10% and 20% of the people who are skilled up enough to do really important work in the core should be exploring instead. And a larger group around them, who can't yet find crucial work in the core (but hope to some day), should also do this. But I don't put that much stock in my numbers; I'm interested in takes from people who would go higher or lower.
- ^
Another candidate: wanting people who can think for themselves, but granting social status to people who appear to come to the same conclusions as leadership.
I appreciate you taking the time to write these thoughts, Owen, because they address a question I've been having about EA thanks to all the recent publicity. "How much does the EA community know about the field of Social Impact generally and all of its related areas of expertise?" (ex. Impact Investing, Program Evaluation, Social Entrepreneurship, Public Health, Corporate Social Responsibility, etc.) I don't consider myself an effective altruist, but I have worked/taught in the field of Social Impact for about 16 years.
I've been wondering about this because the public discourse around EA seems to focus on only a few things: utilitarianism, GiveWell-recommended charities, animal welfare, and longtermism/existential risks. I know this isn't a comprehensive picture. As MaxRa pointed out, 80,000 Hours represents a wide range of areas for impact. But, for example, I don't know how much the 80,000 Hours pluralism penetrates the group that takes the Giving What We Can pledge or the members of this forum.
Does the EA community consider itself embedded in the field of Social Impact, or as something distinctly different?
To answer your original point about getting out of EA bubbles: the book Impact by Sir Ronald Cohen is a nice, relatively recent survey of Social Impact, and it is chock full of examples. All the areas he covers are places where EA could find like-minded people with useful expertise (along the lines of what DavidNash mentioned).