
Note: I wrote a version of this for the pre-EAG university group organizer summit and found it helpful for laying out my messy thoughts. I thought it might be helpful to share with more people, so I made this version. It isn’t up to my standards for mass consumption (it contains some unsubstantiated claims in particular), but I thought it would be better to share than not.


If I were an EA group organizer right now, I would be pretty confused about whether I should pivot most of my attention toward an AIS-specific group[1]. To be honest, I am fairly confused about the value of EA community building (CB) versus cause-specific community building in the landscape we are now in (especially with the increased attention on AI)[2]. I think both are plausibly very high value.


In this post, I try to lay out some key considerations for deciding what you personally should focus on. It is meant to complement Kuhan’s post and shouldn’t really stand on its own. Since Kuhan’s post is a bit more positive on AIS groups, I wrote this more through the lens of why you might not choose to run an AIS group. I’ll also note that this is targeted at people who think AIS is plausibly the most significant problem to work on; it doesn’t make any arguments as to why that may or may not be true. Finally, a disclaimer: while some of these points gesture at EA CB as a means to AIS CB, I generally think it is bad to build an EA group with the intention of convincing people to go into certain career paths rather than introducing new ideas and connecting people to opportunities. I have said more on this here and here.


So the decision is “Where on the continuum from pure EA to pure AIS do I want to be?”[3], and some factors you could weigh include: Personal fit, Track records, Keeping options open, and Risks. I will go into each of these in a bit more detail.

Personal fit

  • Overall argument:
    • (I think) Not everyone who is a great EA CBer would make a great AIS group CBer, especially on the margin.
    • It is worth noting that deciding not to run an AIS group doesn’t mean your school won’t have one. Through your EA CB you might find someone who would be better suited to run the AIS group.
    • I think you should do the thing you will be excellent at, within reason
      • In particular, this feels true given that EA groups do also produce AIS people. So it is plausible that running an excellent EA group would produce more of these folk than running a mediocre AIS group.[4] 
  • Things that seem possibly more important for AIS organizers than EA organizers (focused on technical rather than policy groups; most of these are stolen from Kuhan)
    • Strong object-level knowledge of AIS is the big difference that comes to mind
      • Or the prospect of developing this feels exciting
    • (similarly) Baseline high levels of interest in AI alignment
      • Being motivated by EA and impact considerations doesn’t naturally translate to being “nerdsniped” by ML
        • Which might be helpful for engaging with those who are
    • (Maybe) More credibility or ability to engage with academics
      • Professor engagement might be a good idea for AIS groups because AIS is such a hot topic, so being really knowledgeable matters for making a good impression on professors and networking with them.
  • But you might have more fit than you think
    • You can develop gusto
      • People can skill up/change what they get excited about
        • e.g.: I used to think I should focus on earning-to-give (ETG) CB rather than careers or longtermist CB, and updated on that
    • Maybe you are a better fit for a governance-focused group
    • A lot of the work needed to improve AIS groups doesn’t require AI knowledge: making sure emails are collected, things go out on time, and the group generally functions and runs well. But you do need people who can facilitate discussions, create social community, and do research projects

Track Records

  • EA groups have a pretty good track record
    • Almost everyone at the summit had been impacted by an EA group
    • You might think these groups are too good to throw away, and that the impact lost by handing yours to the next-best organizer would outweigh what you would gain by shifting
  • EA groups in particular have a decent track record of attracting people who think very carefully about problems
    • If you want those sorts of people running AIS groups, this also would strengthen the case for EA groups to stick around as they can be a good place to recruit from.
  • Right now we have some AIS group success stories. But those groups are:
    • About a year old (might not last)
      • Since we have fair evidence of EA groups lasting, it seems more likely that they will continue to last.
    • At top universities (might not generalize)
    • Run by very aligned and talented organizers (might not stay value aligned with next gen of organizers)
    • Given a very large number of resources (might not be as cost-effective)
  • Historically, certain types of groups default to crowding out others.
    • E.g.: there used to be 80k groups and GWWC groups, but those were almost all crowded out by EA groups
    • If you don’t want AIS groups to totally crowd out EA groups, there may be more of a case for putting effort into preventing this

Keeping options open

  • You may also buy into arguments for worldview diversification and not want to put all your eggs in one basket
    • You may think a Cause X will pop up, or believe EA principles are robust across many plausible worldviews
  • Given how quickly our understanding of bottlenecks shifts, this could be valuable
    • It is plausible that certain types of talent we might need will be more interested in the EA framing than a direct AIS framing
  • But on the other hand, many successful movements had a specific problem they focused on rather than a general ideology

Risks

  • It seems plausible that it is easier to do harm with an AIS group than with an EA group, for example by:
    • Getting people working on transformative AI safety, which accidentally (or inevitably) causes more capabilities research
      • Nuance: specifically, I think they might cause more capabilities research on AGI or transformative AI rather than narrow AI
        • AI is being hyped up, and students know about it, but it is unclear whether they think the world will look totally different in 20 years
          • So they might be hyped to work on AI but not AGI?
        • My current guess is that AI is already hyped enough that there isn’t much counterfactual capabilities advancement[5] - there are stronger market pressures influencing AI capabilities than AIS groups (though again, I am not sure this holds for AGI)
    • Being more annoying/less convincing to professors who want to hear about AIS
      • Students could plausibly lessen the credibility of the field
  • Good epistemics are less naturally baked into AIS groups than EA groups
    • Not that I think EA groups are superb at these but I do think they are much better than most populations
      • And if EA groups aren’t great at this even with it baked into the curricula, that makes me a bit more worried about curricula without it baked in
  • (weakest) Maybe they just make less sense?
    • At least to do them in the way that we currently conceive of student groups.
    • If I think about the kinds of talent we need and back-chain into how we get more of them, it is something like
      • Theorists -> Global talent search, talent programs, specific recruiting
      • Senior researchers -> Recruiting from PhD programs, maybe workplace groups or outreach to profs/academics
      • Project managers -> consulting network/workplace groups/training/recruiting
    • I don’t really back-chain to (undergrad, social) student groups?[6]
      • But maybe more like the things HAIST and MAIA are doing
        • So a crux might be how good you would be at running a research program
    • EA, on the other hand, is a set of principles, a way of thinking about how to do good: a question/mission/investigation of how we can use our resources to do the most good. There are so many students asking how they can make a difference with their careers, and very few resources helping them do this. On a more extreme level, there are many students who buy into the stronger arguments, the ones focused on doing the most good. They have an insanely strong desire to help others, and as many others as possible. For them this is more of an identity/more of a worldview/more of a way of life, which makes social connections really important[7].
      • Anyways, I am more convinced that EA groups, in the way we currently conceive of them, make sense in my theory-of-change world.


Appendix: My extremely rough guess about what we might want the total landscape to look like in 1-2 years (because I was told I should share it)

  • The top 5-15 schools for AI have very well-run/developed AIS groups. They have part- or full-time people working on them, work with academia/grad students/professors, and focus on building skills and training/pipelining talent.
  • These schools also have EA groups. EA groups serve as a node for working on cause prioritization. They have (ideally very) high epistemics and are not focused on recruiting as many people as possible. They focus on people who deeply care about the world and how to do good in it.
  • I want people to keep ties with this node in case their cause prioritization changes
  • UGAP-style EA group creation still exists; EA groups still exist and need someone championing them
  • This setup catches more counterfactual people who care deeply about EA ideas


  1. ^

     And in fact, as an organizer of organizers, I am also confused about this (i.e., I have a lot of on-the-ground experience with an EA group, but maybe I should go get some with an AIS group so I can help there)

  2. ^

     I am also confused about whether we should focus on uni students or people further along in their careers. Personally, I think EA should put more effort than it currently does into later-career professionals, but we should still target uni groups, and running one is an especially good option when you are university-aged.

  3. ^

     Note that I say continuum since shifting towards more cause-specific CB ≠ only doing cause-specific CB.

  4. ^

     But this is a bit hand-wavy, and I don’t know if we have sufficient evidence backing it (but I lean towards it)

  5. ^

     Or at least, I think AI labs are not having trouble hiring capabilities researchers, so the marginal capabilities researcher produced by an AIS group is probably fine unless that person is particularly brilliant, in which case we either want them working on safety or not working on this at all (maybe going ETG)

  6. ^

     Although I should flag that needs change fairly rapidly - we might find use for a bunch of junior researchers soon, and might already have more junior positions available in policy

  7. ^

     There is also big-tent EA / broad-movement effective giving - it seems good for effective giving to be its own meme. I don’t actually think students are the best audience for this (but maybe), but it also seems a bit easier, so I am happy to just have others do it.
