
(This is a section from a previous post that combined two ideas into one.  I thought it would be good to separate out the second idea and explore it more.)

 

It may be better to see EA as coordination rather than a marketing funnel with the end goal of working for an 'EA organisation'. 

There is still a funnel where people hear about EA and learn more, but they use the frameworks, questions and EA network to work out what their values are and what that means for cause[1] and career selection.

The left side of the diagram below is similar to the original funnel model from CEA, with people engaging more with EA. Rather than seeing that as the endpoint, people can then be connected to individuals and organisations in the fields they are a good fit for.

Focusing on the right side of the diagram, I've tried to represent some fields that people often consider after looking into EA. The sizes of the boxes aim to represent the different sizes of the fields[2] and how much overlap[3] they have with EA.

 

What could be seen as EA is the meta research, movement building and crosscutting support, whereas organisations working on a specific cause area are in a separate field rather than part of EA (which isn't a bad thing).

It's possible that by focusing on EA as a whole rather than specific causes, we are holding back the growth of these fields. It would be surprising if the best strategies for each field were the same as the best strategy for EA.

 

What would visualising EA in this way mean for movement building?

  • More movement building at the field-specific level
    • Support for cause areas to have their own version of the Centre for Effective Altruism and equivalent meta organisations[4]
  • Less emphasis on leading people down a chain of reasoning (for example, effective altruism → longtermism → existential risks → biosecurity), where they may drop off at any point, and more emphasis on connecting people directly to a field
  • More research on how to find, incubate and grow causes
    • This could lead to more meta organisations (the Centre for Effective Centres)

One example: when designing an EA conference, the attendees would mainly be people who are undecided about which cause/career to go into, people who can help them decide, key EA stakeholders from each field, and people in nascent fields. By contrast, people who have already decided which cause area to focus on would probably find more value in a conference tailored to their field, where everyone has a deeper shared level of understanding and can dive into higher-level questions.

One key issue is that there are organisations for specific causes, but they tend to focus on research first, comms or lobbying second, and community building third or fourth. The organisation might arrange a conference every few years or run some fellowships, but movement building is generally not their top goal. When something is a third priority, it often doesn't get done well, or doesn't happen at all. This is in comparison to CEA, which I think has helped grow EA by making movement building its top priority.

There are some projects in these spaces, and I've attempted to list a few of them here, but there are still quite a few gaps, and the organisations that do exist are generally small and don't face much competition.

 

Field Building Gaps

  • Global Development
  • Longtermism
    • Giving money - Not much for individuals, but foundations are attempting to work out what to fund, and there is the Long Term Future Fund
    • Career - 80,000 Hours
    • Coordination - There doesn't seem to be any one organisation doing this, although there are a variety of projects like this and there is a newsletter
  • AI Alignment
    • Giving money - There is a yearly post by Larks; given that it doesn't seem hard for good projects in this space to get funding, this probably isn't much of an issue
    • Career - AI Safety Support for technical research and 80,000 Hours for technical and policy
    • Coordination - The Future of Life Institute has organised some small conferences, but their remit is wider than just AI alignment
  • Animal Welfare
    • Giving money - Animal Charity Evaluators
    • Career - Animal Advocacy Careers
    • Coordination - Not much, but there is a new project aiming to cover this gap
  • Alternative Proteins
    • The Good Food Institute seems to help coordinate money, careers and the field as a whole
  • Biosecurity
    • I couldn't think of an organisation for any of the three categories, although there are informal networks and a new hub being set up in Boston
  • Existential Risk
    • Careers - 80,000 Hours
    • Coordination - CSER, FHI, GCR and FLI all do parts of this, but they tend to focus on research rather than having movement building be a top priority
  • Suffering Risks
    • Coordination - Center on Long-Term Risk, although they focus mainly on research
  • Environmentalism
    • Giving money - Giving Green, Founders Pledge
    • Careers - Work On Climate has a very active Slack community but is only tangentially related to EA
    • Coordination - Effective Environmentalism, but it's only volunteer-run

There are lots of other causes that could be added here and they often have much less field building infrastructure.

With fields that are already large, there are usually organisations that do some of this work, and it may not help to reinvent the wheel. It is still worth considering, though, whether there is value in coordinating the people interested in EA within a larger cause. For example, there are thousands of global development conferences, but none for EA & global development. I think there would be value in organising one, allowing people in EA to tackle the most important questions in the field, and allowing people in global development to get a strong intro to EA if it is their first event.

 

If anyone is interested in tackling one of these gaps, I'd love to chat about it and see if there is a way I can help, just send me a message.

  1. ^

    I use field and cause interchangeably throughout.

  2. ^

    If this were done to scale, then the amount of money/people/organisations in global development would probably be hundreds or thousands of times bigger than the other fields.

    Also, many interventions help in multiple fields, for example alt proteins impacting climate change, land use, etc. I haven't attempted to take that into account.

  3. ^

    This is a rough guess, represented by how much of the green funnel overlaps with the different fields.

  4. ^

    I mean an organisation that does some of the following: conferences and other events, online discussion spaces, supporting subgroups and organisers, outreach, community health, and connecting members of the network with each other.

Comments

I found the concrete implications of distinguishing this more cause-oriented model of EA really useful, thanks!

I also agree, at least based on my own perception of the current cultural shift (away from GHD and farmed animal welfare, and towards longtermist approaches), that the most marginally impactful meta-EA opportunities might increasingly be in field-building.

When I was at EAG London this year, I noticed that there was a fair amount of energy and excitement towards AI safety-specific field building. I'm fairly keen on this since a lot has to go right in order for AI safety to go well, and I think this is more likely when there are people specifically trying to develop a community that can achieve these goals, rather than having to compromise between satisfying the needs of the different cause areas.

One thought I had: if there is an EA conference dedicated to a specific cause area, it might also be worthwhile having some booths related to other EA cause areas in order to address concerns about insularity.

I think this is a good idea. I feel there might be enough EA-adjacent interest in Progress Studies for this to be a field. I think Tom Westgarth was interested here too, and in London there is a small progress cluster.
