I have heard from people who are uncertain about whether EA community building is the right move for them, given the increased prominence of AI Safety. I think that EA community building is the right choice for a significant number of people, and I want to lay out why I believe this.

AI Safety community building seems important

I’m excited to see AI Safety-specific community building and I hope it continues to grow. This piece is not intended to claim that no one should be working on AIS community building. CEA’s groups team is, however, at an EA organisation rather than an AI Safety organisation. I hope we can collaborate with AI Safety groups, as:

  • It would likely benefit both parties to sync on issues like data collection
  • I think there are lessons learned from EA community building that would be relevant and valuable to share

The reasons I think the case for AI Safety community building is strong are:

  • If we want people to work in AI Safety, directly talking about AI Safety seems the most straightforward way to do this
  • There are talented people who will find the AI Safety framing attractive, but would not like the EA framing
  • Early AIS community building efforts have managed to attract significant numbers of talented individuals (although I don’t think it’s inevitable that these early wins will scale, or successfully avoid causing accidental harm)

EA community building is also important

I think EA community building is still very valuable, for five reasons:

  1. EA groups have been successful
    1. In the 2020 EA and Longtermist survey, local groups were mentioned by 42% of respondents
  2. I care about EA values in decision-makers during crunch time. For example, I think people in the EA movement have thought unusually deeply about which catastrophes would and wouldn’t lead to the loss of humanity’s future potential
  3. Having a compelling answer to the questions “how do I do the most good?” and “how do I live a good life?” has historically attracted a lot of talent, including talent that would not necessarily have been attracted by AI Safety (conversely, I expect AI Safety groups to attract people who wouldn’t be drawn to discussions of “how do I do the most good”)
  4. Specific talented organisers can be a better fit for either AIS or EA, and I want both options to exist
    1. I think for both options to exist, both options need to have great organisers. If all of the best organisers went for a single option, I think the other option would either become irrelevant, or cease to exist.
  5. It is still possible that the risks from AI won’t manifest in the way that EAs widely expect, in which case we’ll be glad to have a network of people who care about EA ideas

I want to see collaboration between EA and AIS community building

Although this isn’t the reason I’d like to see AIS community building, there is an extra benefit: I think work on the most pressing problems could go better if EAs did not form a super-majority (but still formed a large enough faction that EA principles are strongly weighted in decision making)

Since the FTX crisis there has been increasing discussion about trustingness amongst EAs. Although I think the FTX crisis could have happened in less trusting communities (e.g., many VCs also lost money in FTX), I think it is true that there are areas where high trust is harmful. Operating in an environment where EAs aren’t a super-majority would improve certain processes that currently rely too heavily on trust. Additionally, I think having EA form a part of your identity can cause in-group effects, where ideas from the outgroup aren’t taken seriously enough. I suspect this would be lessened if people identifying as EA didn’t form a majority.

Based on the above, EA community building should update some of the ways in which it works

I think this implies some updates to how EA groups should operate (this was written with city and national groups in mind, but parts are more widely applicable)

At the top of the funnel

  • Targeting outreach: If there are also AI Safety groups operating in the same area as you, the counterfactual impact of attracting someone who is in the AI Safety group’s target audience is lower. Nonetheless, it remains (importantly) true that people other than machine learning experts can contribute to the world’s most pressing problems
  • Less “defensive” messaging: as EA moved from global health to AI safety, the core EA principles remained the same, but the messaging changed. There was a need to show that these ideas aren’t “too weird”. As AI Safety becomes normalised, EA messaging should become more similar to how it was when EA was principally about global health interventions.

    Note that less defensive messaging doesn’t mean jumping straight to AI. I think it’s important to 
    1. Be serious about trying to work out what the most important thing is
    2. Be open, early on, that AIS is the best guess of many people right now
    3. Remember that AIS may not be right for everyone, and that we might be wrong about AIS
  • In a world where lots of people working on the most important problems don’t identify as EA, I believe that EA groups should, on the margin, focus more on EA ideas relative to EA community. Community is important (see the importance of personal connections here), and most groups that have produced a lot of value have both a focus on ideas AND a strong community. However, I think (a) it is more common to over-focus than to under-focus on community, and (b) a community built around the discussion of ideas is likely to attract people who can improve the world. Concretely, focussing on EA ideas might mean reducing the number of socials and increasing the number of fellowships, learning projects, and discussions (importantly, these can have social elements)

In the rest of the funnel (with much lower confidence)

  • More partnerships/alliances
    • Partnerships can be a pipeline into discovering EA ideas, as well as a pipeline to important positions
    • Strengthens the idea of having people working on the most important problems rather than in EA institutions
    • I think EAs significantly outperform other communities at betting on which issues are important for humanity (stemming from scout mindset and openness), and modestly outperform most other communities at forecasting. However, I don’t think EAs have across-the-board good judgement, and we would benefit from partnering with people who have a good understanding of how things work in specific domains
  • A realism about where specific parts of the most important work will be done
    • It seems increasingly likely that the majority of cutting-edge AI work will be done in the USA, and as the midgame is here, this seems unlikely to change
      • Yes - valuable safety research can be done outside the US, but proximity to the Bay Area seems likely to help
      • Yes - it does feel unfair that certain work requires being in certain countries. This unfairness is compounded by uneven immigration laws
      • Despite the unfairness of the situation, I think it is important that groups have a clear plan for how their efforts actually lead to people working on the most important problems
    • There are other important pieces of work which are less geographically bound (and as such, might be a promising comparative advantage for local groups)

What worries me about these changes

  • Having an EA community gives people a kind of social permission to take important ideas seriously - e.g., someone’s first retreat is often disproportionately impactful. There is a risk of over-correcting and neglecting the community aspects of EA groups.
  • The community collectively plays a filtering role, moving the right people to the right opportunities - and reducing the focus on community could reduce its ability to play this role. However, the current filtering is significantly flawed, as it also filters for “people who like to hang out with EAs.”

 

HT to the CEA Groups team for their comments and ideas


 

Comments

You say community building, but the specifics you describe seem more like recruiting and outreach. All three of those can be good things, but I think conflating them is unhelpful. I think this is especially true because EA is already very aggressive at recruiting and mediocre at post-recruitment support.

as EA moved from global health, to AI safety, the core EA principles remained the same, but the messaging changed.

I think that's the first time I've seen this written as clearly as here, and I don't really like it or agree. My impression is that there are many people attracted to EA not because of AIS, who also won't become interested in AIS or aren't the right fit for that field. If the money for community building comes mainly from an interest in attracting more people into AIS (as it sounds here), and is mainly intended for that, why keep funding EA in general? I would welcome more nuanced portrayals of what EA community building aims to support, like facilitating other types of longtermist career changes, creating an intellectual community motivated by similar moral goals, and supporting people who have changed their careers to stick with their paths.

On the last point, and in line with what Elisabeth pointed to: I also get the impression that you forget to mention the value of community for keeping strong values and sticking to your plan, especially if you move in a work culture that incentivizes very different values than those EAs tend to hold. Having a community of like-minded people with similar core values is important for those who won't change careers anymore, but want to stick to the highly impactful ones they have chosen to pursue. The value of community to them comes from helping them stick to their path.

I think that's the first time I've seen this written as clearly as here, and I don't really like it or agree

Apologies, I think I should be clear that when I say "the messaging changed" I'm just describing what I believed happened, not that I think it was a good thing. I agree that some people aren't interested in AIS, or aren't the right fit, but can still make the world substantially better. I do however think that we should openly say "we think AIS is an important cause area" and should spend less time arguing why that isn't a weird thing to think.

 

I also get the impression that you forget to mention the value of community for keeping strong values, and sticking to your plan

I agree that this is a value of community building, but it seems similarly relevant for explicitly longtermist community building and broad EA community building?

1 - All good, and sorry for my late reply :) I think I understand better what you meant now. 
2 - I'd agree, yes.

I agree that EA community building could be a good option for some subset of people who want to ensure that AI goes well.

There are some people who are well-positioned to do EA community building, but lack the skills to contribute towards AI governance or technical community building. Actually, I would go further and say that a much broader set of people are suited to EA community building than to anything AI safety-specific.

That said, there are some other options you should consider too. If AI safety is what you care about and you don’t have sufficient AI safety or governance knowledge to work on it directly, you may want to consider doing either x-risk or longtermist community building to narrow the focus. On the other hand, it’s also very important to consider the interests of those in your area.

Additionally, you may also want to consider whether you could have a greater impact by volunteering to provide ops support to someone working on AI safety movement building. That said, this requires you to be highly motivated and reliable - something that is much harder than it seems - otherwise your impact might be minimal.

Being a highly motivated, reliable, and intelligent volunteer seems pretty underrated as a source of impact. When you take a salaried position, your counterfactual impact is roughly how much better you are than the person who would otherwise have filled the role. It's easy to imagine competent employees having a negative counterfactual impact by displacing someone better.

On the other hand, if you are a reliable, motivated, intelligent volunteer, you are simply providing additional resources. Volunteering for promising projects that fall into funding cracks could be quite high-EV for those without the financial means to help important projects.

But I would not recommend such volunteering unless you are serious... It's very easy to have negative value to an organization as a volunteer if you take up its time and resources and then leave shortly after, or stick around without actually doing much to help.

Thanks for this. I think it could have been even stronger with a clearer statement on the importance of EA ideas, values, and mindsets. I recognize that you somewhat mention this under reasons 2 and 5, but I would've liked to see it stated even more strongly.
