I've been running EA events in San Francisco every other month, and I often meet a recent graduate who, as part of their introduction, explains to me why they are or aren't working on AI stuff.[1] For the EA movement to be effective at getting things done, we need to be able to
- identify new cause areas
- have diverse skillsets
- have knowledge of different global industries
I think you can gain knowledge that helps with all of these at any job, by developing a deep enough understanding of your industry to identify its most pressing problems and how someone might go about solving them. Richard Hamming's talk on how to do impactful work has a lot of good advice that applies to almost any job. In So Good They Can't Ignore You, Cal Newport argues that the most important factor for success and happiness is getting really good at what you do, since being good gives you more job options, letting you find one where you have the autonomy to make an impact (book summary on Youtube).
Having effective altruists familiar with different global industries, such as
- food and beverage
- manufacturing
- agriculture
- electronics
- biology, chemistry, pharma
- supply chain
- physical infrastructure (everything from public transportation to cell towers to space shuttles)
- (insert other field that requires knowledge outside of computer desk work)
will help expand the tools and mechanisms the movement has for doing good, and expand what the movement thinks is possible. For example, in the cause area of poverty, we want effective altruism to grow beyond malaria nets[2] and figure out how to help a country go from developing to developed. That kind of change requires many people on the ground doing many different things – starting businesses, building infrastructure, etc. The current pandemic may not meet the bar of an extinction risk, but as with an extinction risk, mitigating its harm requires people with diverse skillsets who can build better personal protective equipment, improve indoor air quality, foster trust between the public and public health institutions, and optimize regulatory bodies for responding to emergencies.
Effective altruism is actively looking for people who are really good at, well, pretty much anything. Take a look at the Astral Codex Ten grantees and you'll find everything from people who are working on better voting systems to better slaughterhouses. Open Philanthropy has had more targeted focus areas, but even then their range goes from fake meat to criminal justice reform, and they are actively looking for new cause areas.
It's OK to not go into AI, and there's no need to explain yourself or feel bad if you don't.
I even saw this post fly by estimating that 50% of highly engaged young EA longtermists are engaged in movement building, many of whom are probably doing so because they want to work in the area but don't feel technical enough. https://forum.effectivealtruism.org/posts/Lfy89vKqHatQdJgDZ/are-too-many-young-highly-engaged-longtermist-eas-doing ↩︎
Economic development is much more effective than health-specific initiatives for improving quality of life: see Growth and the case against randomista development ↩︎
I strong upvoted this because:
1) I think AI governance is a big deal (the argument for this has been fleshed out elsewhere by others in the community) and
2) I think this comment is directionally correct beyond the AI governance point, even though it doesn't quite fully flesh out the case (I'll have a go at fleshing out the case when I have more time, but that is time-consuming to do, and my first attempt will be crap even if there is actually something to it).
I think that strong upvoting was appropriate because:
1) stating beliefs that go against the perceived consensus view is hard and takes courage, and
2) the only way the effective altruism community develops new good ideas is if people feel they have permission to state views that differ from the community's "accepted" view.
I think some example steps for forming new good ideas are:
1) someone states, without a fully fleshed out case, what they believe
2) others then think about whether that seems true to them and begin to flesh out reasons for their gut-level intuition
3) other people push back on those reasons and point out the nuance
4) the people who initially had the gut-level hunch that the statement is true either change their minds or iterate on their argument so that it incorporates the nuance others have pointed out. If the latter happens, then
5) more nuanced versions of the arguments are written up, and steps 3 to 5 repeat as many times as necessary for the new ideas to have a fully fleshed-out case.