
I recently attended EAGxSingapore. In 1-1s, I realized that I had picked up a lot of information from living in an EA hub and surrounding myself with highly involved EAs.

In this post, I explicitly lay out some of this information. I hope that it will be useful for people who are new to EA or people who are not living in an EA hub.

Here are some things that I believe to be important “background claims” that often guide EA decision-making, strategy, and career decisions. (In parentheses, I add things that I believe, but these are "Akash's opinions" as opposed to "background claims.") 

Note that this perspective is based largely on my experiences around longtermists & the Berkeley AI safety community. 

General

1. Many of the most influential EA leaders believe that there is a >10% chance that humanity goes extinct in the next 100 years. (Several of them have stronger beliefs, like a 50% chance of extinction in the next 10-30 years).

2. Many EA leaders are primarily concerned about AI safety (and to a lesser extent, other threats to humanity’s long-term future). Several believe that artificial general intelligence is likely to be developed in the next 10-50 years, and that much of the value of the present/future will be shaped by the extent to which these systems are aligned with human values.

3. Many of the most important discussions, research, and debates are happening in-person in major EA hubs. (I claim that visiting an EA Hub is one of the best ways to understand what’s going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.)

4. Several “EA organizations” are not doing highly impactful work, and there are major differences in impact between & within orgs. Some people find it politically/socially incorrect to point out publicly which organizations are failing & why. (I claim people who are trying to use their careers in a valuable way should evaluate organizations/opportunities for themselves, and they should not assume that generically joining an “EA org” is the best strategy.)

AI Safety

5. Many AI safety researchers and organizations are making decisions on relatively short AI timelines (e.g., artificial general intelligence within the next 10-50 years). Career plans or research proposals that take a long time to generate value are considered infeasible. (I claim that people should think about ways to make their current trajectory radically faster; e.g., if someone is an undergraduate planning a CS PhD, they may want to consider alternative ways to get research expertise more quickly).

6. There is widespread disagreement in AI safety about which research agendas are promising, what the core problems in AI alignment are, and how people should get started in AI safety.

7. There are several programs designed to help people get started in AI safety. Examples include SERI-Mats (for alignment research & theory), MLAB (for ML engineering), the ML Safety Scholars Program (for ML skills), AGI Safety Fundamentals (for AI alignment knowledge), PIBBS (for social scientists), and the newly-announced Philosophy Fellowship. (I suggest people keep point #6 in mind, though, and not assume that everything they need to know is captured in a well-packaged Program or Reading List).

8. There are not many senior AIS researchers or AIS mentors, and the ones who exist are often busy. (I claim that the best way to “get started in AI safety research” is to apply for a grant to spend ~1 month reading research, understanding the core parts of the alignment problem, evaluating research agendas, writing about what you’ve learned, and visiting an EA hub).

9. People can apply for grants to skill up in AI safety. You do not have to propose an extremely specific project, and you can apply even if you’re new. Grant applications often take 1-2 hours. Check out the Long-Term Future Fund.

10. LessWrong is better than the EA Forum for posts/discussions relating to AI safety (though the EA Forum is better for posts/discussions relating to EA culture/strategy).

Getting Involved

11. The longtermist EA community is small. There are not tons of extremely intelligent/qualified people working on the world’s most pressing issues; rather, there is a small group of young people with relatively little experience. We are often doing things we don’t know how to do, and we are scrambling to figure things out. There is a lot that needs to be done, and the odds that you could meaningfully contribute are higher than you might expect. (See also Lifeguards.)

12. Funders generally want to receive more applications. (I think most people should have a lower bar for applying for funding).

13. If you want to get involved but you don’t see a great fit in any of the current job openings, consider starting your own project (get feedback and consider downside risks, of course). Or consider reaching out to EAs for ideas (if you're interested in longtermism or AI safety, feel free to message me). 

I am grateful to Olivia Jimenez, Miranda Zhang, and Christian Smith for feedback on this post.

Comments

This post is mostly making claims about what a very, very small group of people in a very, very small community in Berkeley think. When throwing around words like "influential leaders" or saying that the claims "often guide EA decision-making," it is easy to forget that.

The term "background claims" might imply that these are simply facts. But many are not: they are facts about opinions, specifically the opinions of "influential leaders"

Do not take these opinions as fact. Take none for granted. Interrogate them all.

"Influential leaders" are just people. Like you and I, they are biased. Like you and I, they are wrong (in correlated ways!). If we take these ideas as background, and any are wrong, we are destined to all be wrong in the same way.

If you can, don't take ideas on background. Ask that they be on the record, with reasoning and attribution given, and evaluate them for yourself.

I've mostly lived in Oxford and London, and these claims fit with my experience of the hubs there as well. I've perhaps experienced Oxford as having a little less focus on AI than #2 indicates. 

While I agree the claims should be interrogated and that the 'influential leaders' are very fallible, I think the only way to interrogate them properly is to be able to publicly acknowledge that these are indeed background assumptions held by a lot of the people with power/influence in the community. I don't see this post as stating 'these are background claims which you should hold without interrogation' but rather 'these are in fact largely treated as background claims within the EA communities at the core hubs in the Bay, London and Oxford etc.'. This seems very important for people not in these hubs to know, so they can accurately decide e.g. whether they are interested in participating more in the movement, whether to follow the advice coming from these places, or what frames to use when applying for funding. Ideally I'd like to see a much longer list of background assumptions like this, because I think there are many more that are difficult to spot if you have not been in a hub.

I agree with most of what you are saying.

However, the post seemed less self-aware to me than you are implying. My impression from interacting with undergraduates especially, many of whom read this forum, is that "these cool people believe this" is often read as "you should believe this." (Edit: by this paragraph I don't mean that this post is trying to say this, rather that it doesn't seem aware of how it could play into that dynamic.)

Thus I think it's always good practice for these sorts of posts to remind readers of the sort of thing that I commented, especially when using terms like "influential leaders" and "background claims." Not because it invalidates the information value of the post, but because not including it risks contributing to a real problem.

I didn't personally feel the post did that, hence my comment.

In addition, I do wish it were more specific about which people in particular it's referring to, rather than some amorphous and ill-defined group.

xuan

I felt this way reading the post as well: "many of the most influential EA leaders" and "many EA leaders" feel overly vague and implicitly normative. Perhaps as a constructive suggestion, we could attempt to list which leaders you mean?

Regarding 10% chance or greater of human extinction, here are the people I can think of who have expressed something like this view:

  • Toby Ord
  • Will MacAskill
  • 80k leadership
  • OpenPhil leadership

Regarding "primarily concerned with AI safety", it's not clear to me whether this is in contrast to the x-risk portfolio approach that most funders like OpenPhil and FTX and career advisors like 80k are nonetheless taking. If you mean something like "most concerned about AI safety" or "most prioritize AI safety", then this feels accurate of the above list of people.

To the extent possible, I think it'd be especially helpful to list the several people or institutions who believe in 50% chance of extinction, or who estimate AGI in 10 years vs 30 years vs 50 years, and what kind of influence they have.

+1 on questioning/interrogating opinions, even opinions of people who are "influential leaders."

I claim people who are trying to use their careers in a valuable way should evaluate organizations/opportunities for themselves

My hope is that readers don't come away with "here is the set of opinions I am supposed to believe" but rather "ah here is a set of opinions that help me understand how some EAs are thinking about the world." Thank you for making this distinction explicit.

Disagree that these mostly characterize the Berkeley community. #1 and #2 seem the most Berkeley-specific, though I think they're shaping EA culture/funding/strategy enough to be considered background claims; I think the rest are not Berkeley-specific.

I appreciated the part where you asked people to evaluate organizations by themselves. But it was in the context of "there are organizations that aren't very good, but people don't want to say they are failing," which to me implies that a good way to do this is to get people "in the know" to tell you if they are the failing ones or not. It implies there is some sort of secret consensus on what is failing and what isn't, and if not for the fact that people are afraid to voice their views you could clearly know which were "failing." This could be partially true! But it is not how I would motivate the essential idea of thinking for yourself.

The reason to think for yourself here is because lots of people are likely to be wrong, many people disagree, and the best thing we can do here is have more people exercising their own judgement. Not because unfortunately some people don't want to voice some of their opinions publicly.

I am not sure what you mean by "EA strategy". You mention funding, and I think it is fair to say that a lot of funding decisions are shaped by Berkeley ideas (though this is less clear to me regarding FTX regrantors). But I argue the following have many "Berkeley" assumptions baked in: (1), (2), (3 - the idea that the most important conversations are conversations between EAs is baked into this), (4 - the ideas that there exists some kind of secret consensus, and that impact being nearly always fat-tailed is uncontroversial), (5 - "many" does a lot of work here, but I think most of the organizations you're talking about are in the area), (8 - the idea that AI-safety-specific mentors, rather than, say, ML mentors, are the best way to start getting into AI safety), (10 - leaving out published papers, arXiv).

I'm not saying that all of these ideas are wrong, just that they actually aren't accepted by some outside that community.

For this reason, this:

I claim that visiting an EA Hub is one of the best ways to understand what’s going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.

feels a little bit icky to me. That many people get introduced to EA in very different ways, and learn about it on their own or via people who aren't very socially influenced by the Berkeley community, is an asset. One way to destroy a lot of the benefit of geographic diversity would be to get every promising person to hang out in Berkeley and then have their worldview be shaped by that.

"Icky" feels like pretty strong language.

I rather think it's sound advice, and have often given it myself. Besides it being, in my judgement, good from an impact point of view, I also guess that it has direct personal benefits for the advisee to figure out how people at hubs are thinking. It seems quite commonsensical advice to me, and I would guess that people in other movements give analogous advice.

I agree that all else equal, it's highly useful to know what people at hubs are thinking, because they might have great ideas, influence funding, etc.

However, I think a charitable interpretation of that comment is that it is referring to the fact that we are not perfect reasoners, and inevitably may start to agree with people we think are cool and/or have money to give us. So in some ways, it might be good to have people not even be exposed to the ideas, to allow their own uncorrelated ideas to run their course in whatever place they are. Their uncorrelated ideas are likely to be worse, but if there are enough people like this, then new and better ideas may be allowed to develop that otherwise wouldn't have been.

I used the word "icky" to mean "this makes me feel a bit sus because it could plausibly be harmful to push this but I'm not confident it is wrong".  I also think it is mostly harmful to push it to young people who are newly excited about EA and haven't had the space to figure out their own thoughts on deferring, status, epistemics, cause prio etc. 

I don't think the OP said anything about a Berkeley EA hub specifically? (Indeed, #3 talks about EA hubs,  so Akash is clearly not referring to any particular hub.) Personally, when I read the sentence you quoted I nodded in agreement, because it resonates with my experience living both in places with lots of EAs (Oxford, Nassau) and in places with very few EAs (Buenos Aires, Tokyo, etc.), and noticing the difference this makes. I never lived in Berkeley and don't interact much with people from that hub.

I think there's probably not that much we'd disagree on about what people should be doing, and my comment was more of a "feelings/intuitions/vague uncomfortableness" thing than anything well thought out, for a few reasons I might flesh out into something more coherent at some point in the future.

Thanks for writing this up!

Like ThomasW, I also read this as the "Berkeley take on things" (which you do acknowledge, thanks), and if you live in a different EA hub, you'd say different things. Being in Oxford, I'd say many are focused on longtermism, but not necessarily AI safety per se.

Claim 2, in particular, feels a little strong to me. If the claim was "Many EA leaders believe that making AI go well is one of our highest priorities, if not the highest priority", I think this would be right.

I also think a true background claim I wish were here is "there are lots of EAs working on lots of different things, and many disagree with each other. Many EAs would disagree with several of my own claims here".

I think it's great that you did this, I hope it's really helpful to people who would otherwise have made decisions under false assumptions or felt bait-and-switched.

Yes, I think this is really hard because some of these points are subjective "vibes", but in some ways that makes it more important!

That's a good point, that I agree with.

Separately, I think the criticisms in this thread are exaggerated. I don't think the post only captures "what a very, very small group of people in a very, very small community in Berkeley think".