I've been running EA events in San Francisco every other month, and often I will meet a recent graduate, and as part of their introduction they will explain to me why they are or aren't working on AI stuff.[1] For the EA movement to be effective in getting things done, we need to be able to

  • identify new cause areas
  • have diverse skillsets
  • have knowledge of different global industries

I think you can gain knowledge that helps with these things at any job, by getting a deep enough understanding of your industry to identify what its most pressing problems are and how someone might go about solving them. Richard Hamming's talk about how to do impactful work has a lot of good advice that is broadly applicable to any job. Cal Newport writes in So Good They Can't Ignore You that the most important factor for success and happiness is getting really good at what you do, since it gives you more job options, so that you can find one where you have the autonomy to make an impact (book summary on YouTube).

Having effective altruists familiar with different global industries, such as

  • food and beverage
  • manufacturing
  • agriculture
  • electronics
  • biology, chemistry, pharma
  • supply chain
  • physical infrastructure (everything from public transportation to cell towers to space shuttles)
  • (insert other field that requires knowledge outside of computer desk work)

will help expand the tools and mechanisms the movement has to do good, and expand what the movement thinks is possible. For example, in the cause area of poverty, we want effective altruism to grow beyond malaria nets[2] and figure out how to get a country to go from developing to developed. This type of change requires many people on the ground doing many different things – starting businesses, building infrastructure, etc. The current pandemic might not meet the bar of being an extinction risk, but as with an extinction risk, mitigating harm requires people with diverse skillsets who can do things like build better personal protective equipment, improve the cleanliness of indoor air, foster trust between people and public health institutions, and optimize regulatory bodies for responding to emergencies.

Effective altruism is actively looking for people who are really good at, well, pretty much anything. Take a look at the Astral Codex Ten grantees and you'll find people working on everything from better voting systems to better slaughterhouses. Open Philanthropy has had more targeted focus areas, but even then their range runs from fake meat to criminal justice reform, and they are actively looking for new cause areas.

It's OK to not go into AI, and there's no need to explain yourself or feel bad if you don't.


  1. I even saw this post fly by estimating that 50% of highly engaged young EA longtermists are engaged in movement building, many of whom are probably doing so because they want to work in the area but don't feel technical enough. https://forum.effectivealtruism.org/posts/Lfy89vKqHatQdJgDZ/are-too-many-young-highly-engaged-longtermist-eas-doing ↩︎

  2. Economic development is much more effective than health-specific initiatives for improving quality of life: see Growth and the case against randomista development ↩︎

Comments (18)

I believe you left out another important reason why it's okay not to go into AI: because it's okay to think that the risk of AI is wildly overblown. I'm worried that EA might be unwittingly drifting into a community where AI skeptics feel unwelcome and just leave (or never join in the first place), which is obviously bad for intellectual discourse, even if you think they are wrong.

As someone who is still in the process of developing a technical grasp of AI, yeah, I honestly am a bit overwhelmed sometimes by the degree of focus on AI stuff (at least in my college-age EA social circles) over bio and nuclear security, and I would love to deep-dive more into those areas, but it seems like AI safety is where most of the visible opportunities are (at least for now...).

I'm also wary of cargo-culting some of the AI risk arguments to newcomers as a community-builder when I don't necessarily understand everything myself from the ground up.

Yes! Also, I suspect that people who think AI is by far the most important problem might be more concentrated in the San Francisco Bay Area, compared to other cities with a lot of effective altruists, like London. Personally I think we probably already have enough people working on AI, but I was worried about getting downvoted if I put that in my original post, so I scoped it down to something I thought everybody could get on board with (that people shouldn't feel bad about not working on AI).

I wonder how many other people are avoiding discussing their true beliefs about AI for similar reasons? I definitely don't judge anyone for doing so; there are a lot of subtle discouragements for disagreeing with an in-group consensus, even if none of it is deliberate or conscious. You might feel that people will judge you as dumb for not understanding their arguments, or not be receptive to your other points, or have the natural urge to not get into a debate when you are outnumbered, or just want to fit in/be popular.

(It's also okay for non-students to not go into AI safety. :) )

I agree with the statement "It's OK for some people not to go into AI" but strongly disagree with the statement "It's OK not to go into AI, even if you don't have a good reason". One can list increasingly unreasonable statements like:

  1. AI safety is an important problem.
  2. Every EA should either work on AI safety or have a good reason why they're not.
  3. EAs should introduce themselves with what cause area they're working on, and their reason for not working on AI if applicable.
  4. Literally everyone should work on AI safety; there are no excuses not to.

I want to remind people of (1) and defend something between (2) and (3).

Our goal as world optimizers is to find the best thing we could possibly be doing, subject to various constraints like non-consequentialist goals and limited information. This means that for every notable action worth considering, we should have a good reason why we're not doing it. And (2) is just a special case of this, since working on alignment (technical or otherwise) is definitely a notable action. Because there are good reasons to work on AI safety, you need to have a better reason not to.

  • Having 100x more traction on the problem of making Malawi (0.25% of the world population) a developed country is not a good enough reason, because under most reasonable moral views, preventing human extinction is >100x better than raising the standard of living of 0.25% of the world population.
    • Note that there are many people who should not work on AI safety because they have >400x more traction on problems 400x smaller, or whatever.
  • Wanting to expand what EA thinks is possible is not a sufficient reason, because you also have to argue that the expected value of this is higher than investing into causes we already know about.
    • Holden Karnofsky makes the case against "cause X" here: AI risk is already really large in scale; they essentially say "this century we’re going to figure out what kind of civilization is going to tile the entire galaxy", and it's hard to find something larger in scale than that; x-risks are also neglected. It's hard for tractability differences to overwhelm large differences in scale/neglectedness.
  • Not having thought about the arguments is not a good enough reason. Reality is the judge of your actions and takes no excuses.
  • Majoring in something other than math/CS is not a good enough reason, because your current skills or interest areas don't completely determine your career comparative advantage.
  • Finding the arguments for AI risk unconvincing is not a reason to just not work on AI risk, because if the arguments are wrong, this implies lots of effort on alignment is wasted and we need to shift billions of dollars away from it (and if they have nonessential flaws this could change research directions within alignment), so you should write counterarguments up to allow the EA community to correctly allocate its resources.
    • Also, if working on alignment is your comparative advantage, it might make sense to work on it even if the arguments have only a 10% chance of being right.
  • Some potential sufficient reasons:
    • "I tried 3 different kinds of AI safety research and was worse than useless at all of them, and have various reasons not to do longtermist community-building either"
    • "I have 100x more traction on biorisk and think biorisk is 20x smaller than AI risk"
    • "I have 100x more traction on making the entire continent of Africa as developed as the USA, plus person-affecting views, plus my AI timelines are long enough that I can make a difference before AGI happens"
    • "I think suffering-focused ethics are correct, so I would rather prevent suffering now than have a small chance of preventing human extinction"
    • "I can become literally the best X in the world, or a mediocre AI safety community-builder. I think the former is more impactful."
    • "I have a really good story for why the arguments for AI risk are wrong and have spent the last month finding the strongest version of my counterarguments; this will direct lots of resources to preventing various moral atrocities in worlds where I am right"

edit: after thinking about it more I don't endorse the below paragraph

I also want to defend (3) to some extent. Introducing yourself with your target cause area and reasons for working on it seems like a pretty natural and good thing. In particular it forces you to have a good reason for doing what you're doing. But there are other benefits too: it's an obvious conversation starter, and when half the people at the EA event are working on AI safety it just carries a lot of information.

Upvoted for explaining your stance clearly, though I'm unclear on what you see as the further implications of:

Because there are good reasons to work on AI safety, you need to have a better reason not to.

This is true about many good things a person could do. Some people see AI safety as a special case because they think it's literally the most good thing, but other people see other causes the same way — and I don't think we want to make any particular thing a default "justify if not X".

(FWIW, I'm not sure you actually want AI to be this kind of default — you never say so — but that's the feeling I got from this comment.)

Note that there are many people who should not work on AI safety because they have >400x more traction on problems 400x smaller, or whatever.

When someone in EA tells me they work on X, my default assumption is that they think their (traction on X * assumed size of X) is higher than the same number would be for any other thing. Maybe I'm wrong, because they're in the process of retraining or got rejected from all the jobs in Y or something. But I don't see it as my job to make them explain to me why they did X instead of Y, unless they're asking me for career advice or something.

There may be exceptional cases where someone is working on something really unusual, but in those cases, I aim for a vibe of "curious and interested" rather than "expecting justification". At a recent San Diego meetup, I met a dentist and was interested to learn how he chose dentistry; as it turns out, his reasoning was excellent (and I learned a lot about the dental business).

Finding the arguments for AI risk unconvincing is not a reason to just not work on AI risk, because if the arguments are wrong, this implies lots of effort on alignment is wasted and we need to shift billions of dollars away from it (and if they have nonessential flaws this could change research directions within alignment), so you should write counterarguments up to allow the EA community to correctly allocate its resources.

This point carries over to global health, right? If someone finds EA strategy in that area unconvincing, do they need to justify why they aren't writing up their arguments?

In theory, maybe it applies more to global health, since the community spends much more money on global health than AI? (Possibly more effort, too, though I could see that going either way.)

Thanks for the good reply.

This is true about many good things a person could do. Some people see AI safety as a special case because they think it's literally the most good thing, but other people see other causes the same way — and I don't think we want to make any particular thing a default "justify if not X".

I'm unsure how much I want AI safety to be the default; there are a lot of factors pushing in both directions. But I think one should have a reason why one isn't doing each of the top ~10 things one could be doing, and for a lot of people AI safety (not necessarily technical research) should be on this list.

When someone in EA tells me they work on X, my default assumption is that they think their (traction on X * assumed size of X) is higher than the same number would be for any other thing. Maybe I'm wrong, because they're in the process of retraining or got rejected from all the jobs in Y or something. But I don't see it as my job to make them explain to me why they did X instead of Y, unless they're asking me for career advice or something.

My guess is that the median person who filled out the EA survey isn't being consistent in this way. I expect that they could have a one-hour 1-1 with a top community-builder that makes them realize they could be doing something at least 10% better. This is a crux for me.

Separately, I do feel a bit weird about making every conversation into a career advice conversation, but often this seems like the highest impact thing.

If someone finds EA strategy in [global health] unconvincing, do they need to justify why they aren't writing up their arguments?

This was thought-provoking for me. I think existing posts of similar types were hugely impactful. If money were a bottleneck for AI safety and I thought money currently spent on global health should be reallocated to AI safety, writing up some document on this would be among the best things I could be doing. I suppose in general it also depends on one's writing skill.

My guess is that the median person who filled out the EA survey isn't being consistent in this way. I expect that they could have a one-hour 1-1 with a top community-builder that makes them realize they could be doing something at least 10% better. This is a crux for me.

I agree with most of this. (I think that other people in EA usually think they're doing roughly the best thing for their skills/beliefs, but I don't think they're usually correct.)

I don't know about "top community builder", unless we tautologically define that as "person who's really good at giving career/trajectory advice". I think you could be great at building or running a group and also bad at giving advice. (There are several ways to be bad at giving advice — you might be ignorant of good options, bad at surfacing key features of a person's situation, bad at securing someone's trust, etc.)

Separately, I do feel a bit weird about making every conversation into a career advice conversation, but often this seems like the highest impact thing.

I'm thinking about conversations in the vein of an EAG speed meeting, where you're meeting a new person and learning about what they do for a few minutes. If someone comes to EAG and all their speed meetings turn into career advice with an overtone of "you're probably doing something wrong", that seems exhausting/dispiriting and unlikely to help (if they aren't looking for help). I've heard from a lot of people who had this experience at an event, and it often made them less interested in further engagement.

If I were going to have an hour-long, in-depth conversation with someone about their work, even if they weren't specifically asking for advice, I wouldn't be surprised if we eventually got into probing questions about how they made their choices (and I hope they'd challenge me about my choices, too!). But I wouldn't try to ask probing questions unprompted in a brief conversation unless someone said something that sounded very off-base to me.

I disagree with 2) because I think the movement will be able to get more done with a more diverse set of people who are really good at different things. Even if AI is the most important thing, we need people who understand communications, policy, and organizing grassroots movements, and also people who are good at completely unrelated fields who can understand the impact of AI on their field (manufacturing, agriculture, shipping logistics, etc.), even though there aren't opportunities to do that work directly in AI right now.

I strong-upvoted this because:
1) I think AI governance is a big deal (the argument for this has been fleshed out elsewhere by others in the community) and 
2) I think this comment is directionally correct beyond the AI governance bit even if I don't think it quite fully fleshes out the case for it (I'll have a go at fleshing out the case when I have more time but this is a time-consuming thing to do and my first attempt will be crap even if there is actually something to it). 

I think that strong upvoting was appropriate because:
1) stating beliefs that go against the perceived consensus view is hard and takes courage
2) the only way the effective altruism community develops new good ideas is if people feel they have permission to state views that are different from the community "accepted" view. 

I think some example steps for forming new good ideas are:
1) someone states, without a fully fleshed out case, what they believe
2) others then think about whether that seems true to them and begin to flesh out reasons for their gut-level intuition
3) other people push back on those reasons and point out the nuance
4) the people who initially have the gut-level hunch that the statement is true either change their minds or iterate their argument so it incorporates the nuance that others have pointed out for them. If the latter happens then,
5) More nuanced versions of the arguments are written up and steps 3 to 5 repeat themselves as much as necessary for the new good ideas to have a fleshed out case for them. 

There seems to be an "intentions don't matter, results do" lesson that's relevant here. Intending to solve AI alignment is secondary, and doesn't mean that you're making progress on the problem.

And we don't want people saying "I'm working on AI" just for the social status, if that's not their comparative advantage and they're not actually being productive.

Yes, that's exactly it! Even if a lot of people think that AI is the most important problem to work on, I would expect only a small minority to have a comparative advantage there. I worry that students are setting themselves up for burnout and failure by feeling obligated to work on what's been billed by some as the most pressing/impactful cause area, and I worry that it's getting in the way of people exploring different roles and figuring out and building up their actual comparative advantage.

I've been running EA events in San Francisco every other month, and often I will meet a recent graduate, and as part of their introduction they will explain to me why they are or aren't working on AI stuff.

The other day, I had my first conversation ever where someone explained why they weren't sure about going into AI, unprompted. I said something like "no need to justify yourself, EA is a big tent", which felt like the obvious thing to say (given all my experiences in the movement, meeting people who work on a dozen different problems). If some groups have atmospheres where AI self-justification feels important, that seems bad.

(Though I think "explaining why you work on X" is very different from "explaining why you don't work on X"; the former seems fine/natural, the latter not so much.)

*****

Related: an old post of mine on why being world-class at some arbitrary thing could be more impactful than being just okay at a high-priority career.

That post is way too long, but in short, benefits to having a diverse set of world-class people in EA include:

  • Wide-ranging connections to many different groups of people (including skilled people who can contribute to valuable work and successful people who have strong networks/influence)
  • EA being a more interesting movement for the people in it + people who might join

I think this is directionally right relative to my impression of the attitudes on the ground in today's effective altruism local groups. I also directionally agree with Thomas Kwa's pushback below relative to your post. 

EDIT:
If you, as a community-builder, don't think that more people should go into AI, you should not indicate that you think people need a reason not to go into AI!

People who don't have their own views should feel very comfortable saying that it's fine to not have a view yet! 

More detail in the footnote. [1]
END OF EDIT

My actual overall take is that it is good that more people are taking the idea of going into AI seriously. I also think it's incredibly important that the conclusion doesn't feel pre-written, because that is so counter-productive to attracting the most inquisitive and curious minds, who do feel like there are plenty of reasonable objections.

The less free people feel to explore the objections, the more we accidentally select for people who are willing to believe stuff without thinking hard about it themselves. The more people feel like they belong in this group regardless of whether they end up coming to a particular conclusion, the healthier our local groups' epistemics are likely to be. People need to feel free to accept "AI" and "not-AI" to think clearly about this.

Huge amounts of social pressure to come to a certain conclusion about the state of the world are bound to end in more yes-people and fewer people who are deeply curious about how to help others the most.

It is challenging to push certain "most promising candidates" while still making everyone else who thinks really hard, has a nuanced understanding of effective altruism, and decides "I've thought really hard about it and that/none of those seem like my best option" feel they fully belong in an effective altruism local group.

  1. ^

     If you think AI is worth mentioning anyway, or worth the people you talk to thinking more seriously about for other reasons, I think it is good to be upfront about exactly what you think and why.

    Example of how to be upfront and why I think this is so important in community building 

    E.g. "I honestly haven't formed a view on AI yet. I think it's worth mentioning as something worth looking into anyway. The reason I think this is that people who I think generally have robust reasons for their views, and who I agree with on other topics I've thought about more, think AI is a big deal. This makes me suspect I'll end up thinking AI is important even if I don't fully buy into the arguments at the moment."

    This community builder could then go on to discuss exactly who they respect and what ideas made them respect those people. This brings the conversation to things the community builder actually believes, which they can discuss passionately and inspirationally.

     I think great community building conversations I've had with people new to the community happen when:

    1. I change my mind;

    2. they change their mind;

    3. we realise we actually agree but were using different words to express it (and now both of us have more language to express our views to a wider range of people). 

     If I am summarising someone else's view and I don't make that clear, it is very hard for me to move the conversation onto something where one of the three things above happens. If neither I nor the person I'm talking to has fleshed-out views on something (because I'm deferring and they have never thought about it before), the conversation is much less likely to build more nuance into either of our views on the topic.

    My initial mistake
    My mistake when initially writing this comment was forgetting that I now have enough views on AI (though I didn't always) to have these sorts of conversations, so if someone says this to me I should engage with their reasoning and encourage them to flesh out their views.

    I still think that I should absolutely give people permission to take time to come to a view, and to not have a reason other than "I haven't thought about it enough to have a confident view", and that I also should not force people to talk about AI if they don't want to!

    But if they bring it up, which is what is happening in the context of this post, then I think it's good for me to engage with their chosen topic of conversation, with their thinking and my thinking, to see if we can start charting out the common ground and figuring out what the points of disagreement are.

    When I didn't have my own views on AI, I believed it was worth mentioning because other people thought it was worth mentioning. I hope I was able to be upfront about this when talking about AI, but I know that memory paints a rosier picture in retrospect.

    I can imagine myself having glossed over why I didn't go into detail on AI when I didn't have detailed views, because of cultural pressure to make it appear that effective altruism, and I as its representative, have everything figured out. I think that not being upfront about not buying into the arguments would have been a bad way of handling it, because it makes it seem like I buy into the arguments but can't make a nuanced case for them!

     There are certainly components of the AI case that I don't have views on, and if we hit those I think it's good for me to be really upfront. I also have quite shallow views on biorisk, extreme climate change, and nuclear weapons (but really in-depth views about a bunch of other topics). I think that it is very hard to develop your own view on every topic, and it takes a lot of time -- deferring to people who I think think well is often necessary, but it is simply important for me, and for us all, to be as upfront as possible. Being upfront about when we're deferring and when we buy in to the arguments for something is so helpful for community-building conversations to go as well as they possibly can.

Holden Karnofsky also basically says "be so good they can't ignore you" in the 80k podcast episode interviewing him on his career thoughts (as the title of the episode suggests, the advice was basically "build aptitudes and kick ass").

From memory, he also said something like: for most people, going into AI straight away instead of just becoming really good at something you could be really good at is probably a mistake. Having said this, I'm not sure if his views have changed, given we've had a bunch of really quick developments in AI that made a lot of people think AI timelines were way shorter.

I actually love that you didn't just cite EA stuff. I think citing more outside sources is really good for keeping effective altruism and the rest of the world's discourse on how to make the world better more connected (both for making it easier for newcomers by having more material grounded in language that more people understand and presented in ways that are more familiar, but also for the object-level advantage of keeping our epistemics cleaner because we're a little bit less in our echo chamber).

I just also thought it was worth pointing out that people in the EA community whom people respect a lot seem to completely agree with you too (the power of social proof from people within our in-group is totally a thing, and it makes citing stuff from outside the community receive too little social reinforcement IMO).

I sometimes introduce myself by apologetically explaining why I am working on AI safety given that so many other people are seemingly doing the same. The general lesson is that we should just do what we think needs to get done without feeling a need to apologise for it. Virtue points to anyone working on anything that they think other people will think they aren't qualified for. Virtue points to those who feel peer-pressured to work on something but choose a different path.

Strongly agree. Thanks for writing this!
