TL;DR
- Most community-building effort currently goes towards creating new, highly-engaged EAs, yet the vast majority of people who can do things to help further EA goals will not wish to be highly-engaged.
- You don’t need to actually be an “EA” to do effectively altruistic things, which is why influencing people with EA ideas can be very useful.
- While we already do some influencing, we might want to do more on the margin, especially if we feel urgency about problems like AI alignment.
- EA is special; we should try sharing the ideas/mental models/insights we have with those who can do good things with them.
- It would be useful to have an idea as to how cost-effective it is to try influencing others relative to creating highly-engaged EAs.
Epistemic status
Quite uncertain. I’m more confident in the sign of these arguments than the magnitude. This post was mostly informed by my impressions from being highly involved within the EA community over the last year or so, as well as the time I spent working at Giving What We Can and my current work at GovAI. All views articulated here are my own and do not reflect the stances of either of these organisations, or any other institutions I’m affiliated with. Finally, I’m hoping to spark a conversation rather than to make any sweeping declarations about what the community’s strategy ought to be.
I was inspired to write this post after reading Abraham Rowe’s list of EA critiques he would like to read.
Introduction
I’m writing this post because I think that gearing the overwhelming majority of EA community-building effort towards converting young people into highly-engaged EAs might neglect the value of influence, to the detriment of our ability to get enough people working on difficult problems like AI alignment.
Providing altruistically-inclined young people with opportunities to pursue highly-impactful career paths is great. I’m happy this opportunity was provided to me, and I think this work is incredibly valuable insofar as it attracts more smart, ambitious, and well-intentioned people into the EA community for comparatively little cost. But my impression is that I (alongside other highly-engaged EAs) am somewhat unusual with respect to my willingness to severely constrain the list of potential career paths I might go down, and how much I weigh my social impact relative to other factors like salary and personal enjoyment.
Most people will be unwilling to do this, or might be initially turned off by the seemingly large commitment that comes with being a highly-engaged EA. Does that mean they can’t contribute to the world’s most pressing problems, or can only do so via donating? I don’t think so — working directly on pressing problems doesn’t necessarily have to be all or nothing. But my outside impression is that the current outreach approach might neglect the value of influencing the thinking of those who might never wish to become highly involved in the community, and/or those who already have influence over the state of affairs. You know, the people who (by and large) control, or will control, the levers of power in government, business, academia, thought leadership, journalism, and policy.
To be clear, I don’t think we should deliberately soft-sell EA[1] when communicating externally, but we should be aware that some ideas that sound perfectly reasonable to us might sound off-putting to others. Moreover, we should be aware that packaging these ideas together as part of the “EA worldview” risks even just one weird part of the package putting someone off the other ideas (that they might otherwise endorse!) entirely. In this case, it could be better to strategically push just the core thing we want the person/institution to learn about — the importance of cost-effectiveness when allocating charitable resources, or why preventing pandemics is important, for example.
What I mean by influence
My conceptualisation of influence is any outreach that nudges people to do things that are (lower case) ea, rather than trying to guide them directly into the (upper case) EA community. Put more plainly, this does not mean bringing these people into the EA community en masse, nor does it necessarily mean blindly spreading EA memes as far and wide as possible.[2] All it means is doing things that will cause people outside of the EA community to pursue things the EA community is keen on doing, like preventing pandemics, improving animal welfare, aligning AI, etc.
By this definition, the following things[3] would count as immediately influential:
- Using an existing relationship with a government official to meet with a UK MP to chat about the importance of crafting legislation that considers future people’s interests.
- Hosting a talk at a computing hardware conference on why compute governance could be important for making AI safer.
- Setting up an EA organisation/think-tank in DC that is explicitly focused on influencing US policymakers and politicians.[4]
- Running for political office.[5]
- Going on a popular podcast to talk specifically about pandemic prevention.[6]
- Lobbying Congress/other political bodies to increase foreign aid expenditure to low-income countries.
By contrast, the following things would count as influential over the medium/long term:
- Hosting a talk at a US university on the dangers of gain-of-function research, where one of the student attendees eventually goes on to work in a senior position at the CDC (but not explicitly for “EA” reasons).
- Working with university professors to integrate core EA concepts into undergraduate classes, or creating new classes that cover core ideas in EA entirely.
- Offering PhD funding that is tied to conducting research on alternative proteins.
Side note: don’t we already do these things?
You might (reasonably) read the above list and think to yourself, “don’t we already do these things?” I don’t think the ideas I’ve shared above are particularly groundbreaking, nor are they entirely neglected. But my off-the-cuff impression is that the majority of outreach effort in the EA community is focused on other theories of change — find the smartest, most ambitious young people, and get them into the EA community. I would love to see other more targeted and ambitious efforts to influence others where the KPI isn't the number of highly-engaged EAs created.
Value drift, and the tradeoffs of slow growth
I don’t know how to give a precise answer to “how big should EA be” and “how fast should EA grow”, as the tradeoffs involved in thinking about these questions are highly uncertain. While I will briefly touch on these questions, my central claim will instead be that while getting more highly-engaged EAs is great, it might be even better if we focused a bit more of the community’s marginal attention on influencing others with EA tools, principles, and mental models. To caveat this, I’m more confident in the sign of this claim than in its magnitude.
I’m sympathetic to the claim that value drift is a serious risk of growing the EA community too quickly. Indeed, value drift could be quite bad. I would hate to see our impact stymied by an erosion of the principles that make EA such a promising avenue for change. But the tradeoffs are real. For example, depending on your views on AI timelines, there are plausible claims to be made that the number of people currently doing useful work on AI safety/governance is far too small, and growing far too slowly, given the urgency of the problem.
Beyond influence: getting non-EAs to directly help the community achieve its goals
Let’s extend this model to the EA movement as a whole. With a total of ~10,000 EAs, and an annual growth rate of 10 to 20 percent (and my best guess is that the bulk of this growth comes from relatively junior EAs), do we really think we can align AI, prevent the next pandemic, transition past factory farming, and eliminate extreme poverty in a reasonable amount of time? I like to lean optimistic, but even I have my limits.
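For a sense of scale, here is a minimal back-of-envelope sketch. All figures are assumptions for illustration (and it ignores attrition and where growth actually comes from); it simply compounds the rough numbers above:

```python
# Rough back-of-envelope sketch only: every number here is an assumption
# for illustration, not a claim about actual EA growth or attrition.

def project(initial: int, annual_growth: float, years: int) -> int:
    """Community size after `years` of compound growth at `annual_growth`."""
    return round(initial * (1 + annual_growth) ** years)

base = 10_000  # assumed current community size, per the rough figure above
for growth in (0.10, 0.20):
    for years in (10, 20, 30):
        print(f"{growth:.0%} growth, {years} years: ~{project(base, growth, years):,}")
```

Even at a sustained 20 percent, it takes roughly two decades to reach a few hundred thousand people; at 10 percent, the community stays under 200,000 after thirty years. That is small relative to the number of people in government, industry, and academia whose decisions bear on these problems.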
Whether we like it or not, it seems likely to me that we will need people to work on these problems (particularly on the ground level[7]) who don’t use Bayesian language, never drink Huel, prefer other hobbies to forecasting, and think “EA” is a gaming company. Again, we might not wish to try growing the EA community with these people. Most will likely never wish to join. But if we can influence these individuals and/or institutions to do things like thinking about scale, neglectedness, and tractability when deciding what to focus on, that could be extremely valuable — possibly as valuable as adding new highly-engaged EAs to the community. This strategy also partially mitigates concerns about value erosion that come from scaling the community too quickly.
Note: If you find these claims unconvincing, I encourage you to comment with your vision for how we solve these big problems over the next few decades while growing 10 to 20 percent per year and keeping our external influence roughly as it is now. I’m not by any means convinced that I’m right, but I am having a tough time envisioning this theory of change being sufficient for getting the job done. I would love to hear alternative perspectives that I might not have considered.
What makes EA special
One of the most special parts about EA, in my view, is the epistemic culture that has been carefully built in the community. It’s this culture that allows us to take weird ideas seriously; to reason under extreme uncertainty and ambiguity; to strive to be truth-seeking; to prioritise, and then try tackling the most important problems head on; and to coordinate a bunch of people with different worldviews under the same umbrella.
I think it would be a disservice if we largely kept these ideas to ourselves, or only measured the success of our outreach by the number of highly-engaged EAs we bring into the community. At the end of the day, having more EAs is only instrumentally useful. I don’t care if EAG conferences grow in attendance, or if I have more people to talk about fun thought experiments with; I care about solving AI alignment, ending poverty, and improving animal welfare, whether those who are doing useful work to make progress on these challenges are card-carrying EAs or not. Being insular might allow us to avoid the challenges and risks of outreach, but this strategy can be costly.
To make this argument more concrete, let’s use the numbers from this Twitter thread by Ben Todd. It would be a huge win for the community if we could double the number of people working on AI safety from ~100 to ~200 by 2023. All else equal, I think we should spend a significant portion of our attention and resources on making that happen. But what if we also managed to influence the 100,000 researchers working on capabilities to care even just a little bit more about safety?[8] Of course, these things aren’t mutually exclusive, and there are spillovers[9] from one to the other. Moreover, influencing 100,000 capabilities researchers might be quite difficult. But I think it might be risky to hope that we will indirectly influence those outside of the community without explicitly aiming to do so, and I would like to see some analysis on whether or not influence is a cost-effective use of resources.
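To show what such an analysis might look like, here is a minimal sketch of the comparison above. Every parameter is a made-up placeholder (including the assumed 0.2% attention shift, and the assumption that diffuse attention is fungible with a dedicated researcher’s time); the point is only to make explicit which quantities a real cost-effectiveness estimate would need to pin down, not to suggest an answer:

```python
# Minimal sketch of the comparison above. Every parameter is a made-up
# placeholder, not an estimate; it only shows which quantities a real
# cost-effectiveness analysis would need to pin down.

def total_safety_effort(researchers: int, avg_attention: float) -> float:
    """Crude 'safety-weighted' effort: headcount times the average fraction
    of attention devoted to safety."""
    return researchers * avg_attention

# Scenario A: double the number of dedicated safety researchers.
baseline = total_safety_effort(researchers=100, avg_attention=1.0)
doubled = total_safety_effort(researchers=200, avg_attention=1.0)

# Scenario B: nudge 100,000 capabilities researchers to shift an assumed
# 0.2% of their attention towards safety.
influenced = total_safety_effort(researchers=100_000, avg_attention=0.002)

print(f"Gain from doubling safety researchers:          {doubled - baseline:.0f}")
print(f"Gain from influencing capabilities researchers: {influenced:.0f}")
```

Under these particular placeholder numbers the two look comparable, but that conclusion is driven entirely by the assumed parameters, and the sketch says nothing about the relative cost or difficulty of achieving either outcome, which is exactly the analysis I’d like to see.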
Conclusion
All in all, I think external influence might be neglected by the EA community. This claim is based on the following seven premises:
- Most community-building effort currently goes towards creating new, highly-engaged EAs.
- The vast majority of people who can do things to help further EA goals will not wish to become highly-engaged EAs.
- You don’t need to be a highly-engaged EA (upper-case) to do ea (lower-case) things.
- We already do some influence-related things, but we should do more on the margin.
- Staying both relatively small and insular will make solving our priority problems quite difficult.
- EA has special epistemic norms, and we should try hard to share them with others who control the levers of power without explicitly trying to bring them into the community.
- It would be useful to have an idea as to how cost-effective it is to try influencing others relative to creating highly-engaged EAs.
I don’t have the answers to these challenges. My aim with this post is to start a conversation, and to hear from others who have a different view on our need (or lack thereof) to influence others outside of the community. I encourage you to comment below to get the conversation started.
Open questions I’m uncertain about
- Are there any organisations/projects explicitly focused on influence that I might’ve forgotten, neglected to properly highlight, or am otherwise unaware of? Or am I just completely downplaying how much influence we’re already trying to have?
- Is there some mechanism that I haven’t considered wherein creating highly engaged EAs could be the best way — or at least a reasonably good way — to influence others?
- What are some downsides/risks with trying to spread parts of the EA toolkit — e.g., understanding cost-effectiveness, the ITN framework, expanded moral circles, the importance of future generations, the scout mindset — to external stakeholders?
- It seems obvious that getting people to do robustly good ea things is, well, good. But perhaps it is significantly more costly/difficult to influence people than to create new highly-engaged EAs. If so, how costly is it? What are the bottlenecks, if there are any?
- If it turns out that (1) influence is important, and (2) the community actually is neglecting influence, what can be done about it?
Acknowledgements
Thank you to Luke Freeman, Grace Adams, Nathan Young, Caleb Parikh, Michael Townsend, Frances Lorenz, Bella Forristal, Trevor Levin, and everyone else I spoke with about this in person for your thoughts and feedback on this piece.
Appendix 1: The difference between influence and bringing mid-career people into the EA community
I think influencing others and bringing mid-career people into the EA community are different efforts, but I also think both are important and probably interrelated.
My mental model is that it could be highly valuable to influence, say, (1) someone with ~20 years of experience working for the US Office of Science and Technology Policy, (2) an influential public intellectual in the technology space, (3) a key decision-maker in the military, (4) a senior person at the UN, or (5) a member of parliament.
That being said, it would be great if we had some more mid-career people (especially those with excellent managerial skills) to help the community directly scale up projects, or tenured academics switching to valuable research topics. My impression is that there is already some rumbling about this in the community.
Perhaps the people in the former group could end up wanting to work directly at an EA/EA-adjacent org. But if not, I think it is still valuable to influence their worldviews.
Footnotes
[1] H/T to Nathan Young for mentioning this.
[2] That being said, I still think it’s quite good to share robustly good EA ideas through mass media, like moral circle expansion or the importance of donating to charity based on cost-effectiveness. I’m happy to see WWOTF bringing the longtermism discourse to a much wider audience, for example.
[3] Notably, I’m not making a judgement about the relative pros/cons of these strategies. There is plenty of room to debate whether or not these things would be good in expectation.
[4] I think there are a few organisations that do this, but I’d like to see more.
[5] Of course, there was Carrick Flynn’s campaign. I think this was great, and I’d be happy to see others make similarly ambitious bets.
[6] See Will MacAskill in August 2022.
[7] We might not need everyone working directly in EA organisations to be entirely value-aligned. Other organisations, both good and harmful, can get people to buy into their goals by doing simple things like paying good salaries. To scale, we will need people to do things like operations, recruiting talent, providing legal counsel, or managing researchers. It might not really matter if these people have joined a virtual program, attended a conference, read The Precipice, or taken the Giving What We Can Pledge. H/T to Frances Lorenz for pointing this out.
[8] You might (reasonably) think that the marginal AI safety researcher is an order of magnitude more important than influencing capabilities researchers and/or important decision-makers at AI labs, but this doesn’t strike me as obviously true.
[9] It seems reasonable to think that growing the community of people working on AI safety would provide greater credibility to safety arguments, thereby influencing people who are working on capabilities research.
I think this is very good and highlights a good point: that reaching people outside of EA is crucial to achieving much of what we want to achieve, and we don't need those people to become "EAs" for it to be valuable.
It seems like there is a quality and quantity trade-off where you could grow EA faster by expecting less engagement or commitment. I think there's a lot of value in thinking about how to make EA massively scale. For example, if we wanted to grow EA to millions of people maybe we could lower the barrier to entry somehow by having a small number of core ideas or advertising low-commitment actions such as earning to give. I think scaling up the number of people massively would benefit the most scalable charities such as GiveDirectly.
The counterargument is that impact per person tends to be long-tailed. For example, the net worth of Sam Bankman-Fried is ~100,000 times that of a typical person. Therefore, who is in EA might matter as much as, or more than, how many EAs there are.
My guess is that quality matters more in AI safety because I think the level of talent necessary to have a big positive impact is high. AI safety impact also seems long-tailed.
It's not clear to me whether quality or quantity is more important because some of the benefits are hard to quantify. One easily measurable metric is donations: adding a sufficiently large number of average donors should have the same financial value as adding a single billionaire.
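As a quick illustration of that equivalence (a sketch with assumed figures, not estimates of anyone's actual giving):

```python
# Purely illustrative, assumed figures:
billionaire_gift = 1_000_000_000   # one donor giving $1B over a lifetime
average_donor_gift = 50_000        # an average donor giving $50k over a lifetime
print(billionaire_gift // average_donor_gift)  # ~20,000 average donors to match
```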
I suppose this mostly has to do with growing the size of the "EA community", whereas I'm mostly thinking about growing the size of "people doing effectively altruistic things". There's a big difference in the composition of those groups. I also think there is a trade-off in terms of how community-building resources are spent, but the thing about trying to encourage influence is that it doesn't need to trade off against creating highly-engaged EAs. One analogy is that encouraging people to donate 10% doesn't mean that someone like SBF can't pledge 99%.
Yup, agreed. This is my model as well. That being said, I wouldn't be surprised if the impact of influence also follows a long-tailed distribution: imagine if we manage to influence 1,000 people about the importance of AI-related x-risk, and one of them actually ends up being the one to push for some highly impactful policy change.
Agreed. I'm similarly fuzzy on this and would really appreciate it if someone did more analysis here rather than deferring to the meme that EA is growing too fast/slow.
I agree with this entirely. I submitted a post in which I speak to this very idea (though not as clearly and pointedly as you have done):
"What I see missing, is promotion of the universal benefits of equality, altruism, and goodwill. Here I mean simple altruism, not necessarily effective altruism. Imagine if only 20% of the population worked for the greater good. Or if every person spent 20% of their time at it? Convincing more of the world population to do right by each other, the environment, animals, and the future, in whatever capacity possible, seems to me to be the best investment the EA community could make. Working at a local soup kitchen may not be the most effective/efficient altruistic pursuit, but what if everyone did something similar, and maximized their personal fit? I have trouble thinking of a downside, but am open to counterpoint ideas. "
I am a mid-career professional, who only discovered EA a year ago, FWIW.
+1, EA is a philosophical movement as well as a professional and social community.
I agree with this post that it can be useful to spread the philosophical ideas to people who will never be a part of the professional and social community. My sense from talking to, for example, senior professionals who have been convinced to reallocate some of their work to EA-priority causes is that this can be extremely valuable. I've heard some people say they value a highly-engaged EA far more than a semi-engaged person, but I think they are probably underweighting the value of mid-to-senior people who do not become full-blown community members but are nevertheless influenced to put some of their substantial network and career capital towards important problems.
On a separate note, I perceive an extremely high overlap between the "professional" and the "social" for the highly-engaged EA crowd. For example, my sense is that it's fairly hard to get accepted to EA Global if your main EA activity is donating a large portion of your objectively-high-but-not-multimillionaire-level tech salary, i.e. you must be a part of the professional community to get access to the social community. I think it would be good to [create more social spaces for EA non-dedicates](https://forum.effectivealtruism.org/posts/aYifx3zd5R5N8bQ6o/ea-dedicates).
When I got involved in kick-starting our local student chapter, I noticed most of our ideas initially drifted to some form of influencing, but we ended up “correcting” that to what has become an internal motto: quality over quantity. While I still think it's a good initial strategy for a student chapter, your argument did make me think about missed opportunities in influence.
For example, I was recently offered the opportunity to help build the syllabus for an Ethics in Computer Science course, as well as helping create social responsibility modules for an Intro to Econ course. My initial reaction was to prioritize the student chapter, but I can now see a potential opportunity to align both.
I think you're right that there's a way to influence people that doesn't run the risk of value drift or of unintentionally misrepresenting the EA community to the world; this is probably more in line with traditional education, campaigning, lobbying, and activism. In my (limited) experience in the community, there seem to be many low-hanging fruits in this regard, though there have been advances in this direction, as yesterday's post on the Social Change Lab seems to show.
I think that the value is going to vary hugely by the cause area and the exact ask.
So I suspect that the value of producing more highly-engaged people actually stacks up better than many people think.
On the other hand, I agree with the shift towards engaging more with the public, which seems necessary at this stage if we don't want to be defined by our critics.
For global health & development, I think it is still quite useful to have influence over things like research and policy prioritisation (what topics academics should research, and what areas of policy think tanks should focus on), government foreign aid budgets, vaccine R&D, etc. This is tangential, but even if Dustin is worth a large number of low-value donors (he is), the marginal donation to effective global poverty charities is still very impactful.
For AI, I agree that it is tricky to find robustly net-positive actions, as of right now at least. I expect this to change over the next few years, and I hope people in relevant positions to implement these actions will be ready to do so once we have more clarity about which ones are good. Whether or not they're highly engaged EAs doesn't seem to matter inasmuch as they actually do the things, IMO.
100%. And get people … to causes and interventions that are justified by these criteria (e.g., farmed animal welfare).
I suspect that intentional effective donations are a good proxy metric for the other activities with more difficult feedback loops.
I really enjoyed reading this. Sharing information is so important when it comes to influencing those levers you mention.
This got me thinking about how this applies to the alternative protein industry, and how it solves the same problems as the Effective Animal Advocacy movement (even though many in the alt protein space may not know much about EA, and so are probably not EA-aligned, they can still do impactful work for the EAA space).
I have wondered before what good it would do if we flipped all of the past governmental and industry investments in farmed ag research on 'how to breed for productive animals' into 'here's a shortcut to selecting the best donor animal cells for cellular agriculture, and the best gene variants for fermentation-derived alt proteins'. Of course, there are foundational considerations like 'well, those variants may not be optimal for proteins grown outside of the animal', etc., but I think the industry isn't tapping into the information that's already there. I've been toying with the idea of drafting a piece on this for open-source dissemination (but between procrastination and not really knowing where to start, it's still just an idea bouncing around my head). Maybe it's the animal geneticist in me that thinks the overlap would be good, whereas someone less immersed in it might think it's not so useful?
Thoughts welcome!
Yes, I think this is a crucial point. In general, it seems like there are not many cost-effectiveness analyses of community building.
The model I keep using in my head to think about these things is the Catholic Church. (Maybe not surprising for an organization that encourages tithing.) There is a highly trained priesthood that thinks very hard about how people can live a moral life and then there is the very much larger body of practicing Catholics. A lot of the quality vs quantity arguing that I see is akin to insisting that all Catholics become priests.
This model would argue for less emphasis on building communities of highly-engaged EAs and more on building communities AROUND highly-engaged EAs who can guide less-engaged members through the strength of their relationships with these people. I don't know what ratio of "priests" to "practicers" maximizes impact - and I really liked Chris Leong's point about it probably being different for different challenges - but I suspect there's a pretty steep opportunity cost to not filling those pews.
This is a very cool model and I would absolutely be thrilled to see someone write up a post about it!