I recently attended EAGxSingapore. In 1-1s, I realized that I have picked up a lot of information from living in an EA hub and surrounding myself with highly-involved EAs.
In this post, I explicitly lay out some of this information. I hope that it will be useful for people who are new to EA or people who are not living in an EA hub.
Here are some things that I believe to be important “background claims” that often guide EA decision-making, strategy, and career decisions. (In parentheses, I add things that I believe, but these are "Akash's opinions" as opposed to "background claims.")
Note that this perspective is based largely on my experiences around longtermists & the Berkeley AI safety community.
General
1. Many of the most influential EA leaders believe that there is a >10% chance that humanity goes extinct in the next 100 years. (Several of them have stronger beliefs, such as a 50% chance of extinction in the next 10-30 years.)
2. Many EA leaders are primarily concerned about AI safety (and to a lesser extent, other threats to humanity’s long-term future). Several believe that artificial general intelligence is likely to be developed in the next 10-50 years, and that much of the value of the present/future will be shaped by the extent to which these systems are aligned with human values.
3. Many of the most important discussions, research, and debates are happening in-person in major EA hubs. (I claim that visiting an EA Hub is one of the best ways to understand what’s going on, engage in meaningful debates about cause prioritization, and receive feedback on your plans.)
4. Several “EA organizations” are not doing highly impactful work, and there are major differences in impact between & within orgs. Some people find it politically/socially incorrect to point out publicly which organizations are failing & why. (I claim people who are trying to use their careers in a valuable way should evaluate organizations/opportunities for themselves, and they should not assume that generically joining an “EA org” is the best strategy.)
AI Safety
5. Many AI safety researchers and organizations are making decisions on relatively short AI timelines (e.g., artificial general intelligence within the next 10-50 years). Career plans or research proposals that take a long time to generate value are considered infeasible. (I claim that people should think about ways to make their current trajectory radically faster— e.g., if someone is an undergraduate planning a CS PhD, they may want to consider alternative ways to get research expertise more quickly).
6. There is widespread disagreement in AI safety about which research agendas are promising, what the core problems in AI alignment are, and how people should get started in AI safety.
7. There are several programs designed to help people get started in AI safety. Examples include SERI MATS (for alignment research & theory), MLAB (for ML engineering), the ML Safety Scholars Program (for ML skills), AGI Safety Fundamentals (for AI alignment knowledge), PIBBSS (for social scientists), and the newly-announced Philosophy Fellowship. (I suggest people keep point #6 in mind, though, and not assume that everything they need to know is captured in a well-packaged program or reading list.)
8. There are not many senior AIS researchers or AIS mentors, and the ones who exist are often busy. (I claim that the best way to “get started in AI safety research” is to apply for a grant to spend ~1 month reading research, understanding the core parts of the alignment problem, evaluating research agendas, writing about what you’ve learned, and visiting an EA hub).
9. People can apply for grants to skill up in AI safety. You do not have to propose an extremely specific project, and you can apply even if you’re new. Grant applications often take 1-2 hours. Check out the Long-Term Future Fund.
10. LessWrong is better than the EA Forum for posts/discussions relating to AI safety (though the EA Forum is better for posts/discussions relating to EA culture/strategy).
Getting Involved
11. The longtermist EA community is small. There are not tons of extremely intelligent/qualified people working on the world’s most pressing issues. There is a small group of young people with relatively little experience. We are often doing things we don’t know how to do, and we are scrambling to figure things out. There is a lot that needs to be done, and the odds that you could meaningfully contribute are higher than you might expect. (See also Lifeguards)
12. Funders generally want to receive more applications. (I think most people should have a lower bar for applying for funding).
13. If you want to get involved but you don’t see a great fit in any of the current job openings, consider starting your own project (get feedback and consider downside risks, of course). Or consider reaching out to EAs for ideas (if you're interested in longtermism or AI safety, feel free to message me).
I am grateful to Olivia Jimenez, Miranda Zhang, and Christian Smith for feedback on this post.
This post is mostly making claims about what a very, very small group of people in a very, very small community in Berkeley think. When throwing around words like "influential leaders" or saying that the claims "often guide EA decision-making" it is easy to forget that.
The term "background claims" might imply that these are simply facts. But many are not: they are facts about opinions, specifically the opinions of "influential leaders".
Do not take these opinions as fact. Take none for granted. Interrogate them all.
"Influential leaders" are just people. Like you and me, they are biased. Like you and me, they are wrong (in correlated ways!). If we take these ideas as background, and any are wrong, we are destined to all be wrong in the same way.
If you can, don't take ideas on background. Ask that they be on the record, with reasoning and attribution given, and evaluate them for yourself.
I agree with most of what you are saying.
However, the post seemed less self-aware to me than you are implying. My impression from interacting with undergraduates especially, many of whom read this forum, is that "these cool people believe this" is often read as "you should believe this." (Edit: by this paragraph I don't mean that this post is trying to say this, rather that it doesn't seem aware of how it could play into that dynamic.)
Thus I think it's always good practice for these sorts of posts to remind readers of the sort of thing that I commented, es...