Hello! I'm a fourth-year undergrad in the United States majoring in computer science, and I'm graduating after next semester. I'm also a group organizer for an EA university group which I started this semester. I've been surprised by how straightforward it was for us to start an EA group and, through advertising and running an introductory EA fellowship, help dozens of students learn about EA this semester and become convinced of its core ideas. I'm excited about the idea of starting additional EA university groups and I think that would be pretty valuable (and apparently someone at Open Philanthropy thinks high-quality community-building would be better than donating a whole lot of money). It seems that other sorts of community-building could be pretty high-impact as well, although I'm not that familiar with them.
My current job plans are: do software engineering for two years (I currently have a new grad software engineering offer with a unicorn, which I'm pretty keen on), and then transition into EA community-building (or maybe AI safety engineering or something, depending on what seems higher-impact). This post isn't under my real name in case said company is browsing the EA Forum and recognizes my name (would be cool if they were browsing the EA Forum, but I'm not gonna risk it).
Another option is to jump straight into EA community-building after graduation. It sounds like the Centre for Effective Altruism will shortly have a lot of roles open for university campus community-building (honestly sounds like they're desperately trying to find people so that sounds like some nice counterfactual impact), and there are also opportunities from Redwood Research and Lightcone Infrastructure.
Some reasons to do software engineering, at least for a while:
- Build career capital that is legible outside the EA community, at a company that's reasonably prestigious (at least within tech). I think this increases the options that I have available and also makes my later work doing random things like EA community-building look more reputable.
- Build career capital relevant for things like software engineering or machine learning engineering for AI safety.
- Take advantage of new grad opportunities. A lot of non–new grad software engineering opportunities require a year or two of industry, non-internship experience, and new grad opportunities are generally limited to people who graduated in the past year. Although I don't think it would be impossible for me to get a decent software engineering job later on with just internship experience, I think my options would be significantly limited.
- Not raise a ton of eyebrows from my family.
Reasons to jump straight into community-building:
- Get in some more person-hours of EA community-building before transformative AI comes(?). I think this is actually a pretty important consideration—I don't mean to meme it too hard.
- Work hours would probably be pretty flexible, and I do like to take naps in the middle of the day.
I'd like to choose whichever option is higher-impact. What are your thoughts? In terms of what impact means to me, I think my cause prioritization and values are about the same as 80,000 Hours and the Centre for Effective Altruism. I'll also be applying for advising from 80,000 Hours but I wanted to hear some thoughts from the broader EA community.
I don't feel more than 50% confident that I want to pursue a community-building or meta-EA career path. Community-building might be over-represented in the career plan I sketched out above, since until a few days ago I hadn't seriously considered that software engineering or machine learning engineering for AI safety were things I could be qualified to contribute to.
(Previously, I thought that AI safety ML engineering positions were exceedingly competitive, and I didn't know that software engineering for AI safety was much of a thing. My update was from browsing the Anthropic job board and noticing the lack of hard requirements for many of these positions, and from people on the EA Corner Discord thinking it was doable to get an AI safety position. I also re-read "AI Safety Needs Great Engineers" with a fresh perspective. The first time I read it, I was thinking, "wow, I have no idea how to write a substantial pull request to a major machine learning library; therefore, I can't work in AI safety". The second time I read it, I paid more attention to the sentence "Based on the people working here already, 'great software engineer' and 'easy to get on with' are hard requirements, but the things in the list above are very much nice-to-haves, with several folks having just one or none of them.")