Hi, I'm an 18-year-old starting college in a week, studying computer engineering and mathematics. Since I have technical interests, and since AGI seems to have a much higher probability of ending humanity this century (1/10, I think) than the causes I would rather work on (e.g., biorisk at 1/10,000), would the utility-positive thing to do be to force myself to get an ML-alignment-focused PhD and become a researcher?

I am at a mid-tier university. I think I could force myself to do AI alignment since I have a little interest in it, but not as much as the average EA, so I wouldn't find it as engaging. But I also have an interest in starting a for-profit company, which couldn't happen with AGI alignment (most likely). I would rather work on a hardware/software combination for virus detection (biorisk), climate change, products for the developing world, other current problems, or problems that will be found in the future.

Is it certain enough that AI alignment is so much more important that I should forgo what I think I will be good at/like to pursue it?

Edit: my original wording confused some people into thinking I was posing a false dichotomy between "pursuing my passion" and doing alignment work. I've removed that comment.


9 Answers

Hey! I think this is a good question that a lot of people have.

I don't think either extreme is a good idea. I wouldn't recommend simply "following your passion", but I also wouldn't recommend "forcing" yourself to do the job that looks best in expectation.

In reality, the best job for you will probably be something you have at least some interest in, especially if it's going to be self-directed theoretical work! But I'd bet there are a lot of things you're interested in, or could be interested in... :)

I'd recommend trying lots of different areas over the next few years, through courses, reading, internships, conversations, or summer jobs, and trying to find a few that interest you. Ideally they'd be areas where you can also help a lot of people!

I also think it's totally okay to be uncertain about what you're going to work on long-term until your late 20s or even later, so you shouldn't worry about following an interest and later on changing your mind.

I hope that helps and I'd love to see an update in a year or two!

I wasn't proposing "follow your passion" as the alternative. I do think that some combination of personal interest and external importance will probably be highest-utility for a given personality. I just wanted to make sure that AGI alignment wasn't so overwhelmingly important that I would practically have to throw away my feelings for the sake of humanity's existence. I have also read a recent post questioning the basis of the "recommended careers" in EA and 80,000 Hours (post). Thanks for the post!

Why do you think you'd need to "force yourself"? More specifically, have you tested your fit for any sort of AI alignment research?

If not, I would start there! e.g., I have no CS background, am not STEM-y (was a Public Policy major), and told myself I wasn't the right kind of person to work on technical research ... But I felt like AI safety was important enough that I should give it a proper shot, so I spent some time coming up with ELK proposals, starting the AGISF curriculum, and thinking about open questions in the field. I ended up, surprisingly, feeling like I wasn't the most terrible fit for theoretical research!

If you're interested in testing your fit, here are some resources.

I could also connect you with other people who are interested in testing their fit, if you'd like. In my experience, it's useful to have like-minded people supporting you!

Finally, +1 to what Kirsten is saying: my approach to career planning is very much "treat it like a science experiment," which means exploring a lot of different hypotheses about what the most impactful path (including personal fit, etc.) looks like for you.

Edit: here are also some scattered thoughts on other factors that you've mentioned:

  • "I also have an interest in starting a for-profit company, which couldn't happen with AGI alignment (most likely)."
    • FWIW, the leading AI labs (OpenAI, Anthropic, and I think DeepMind) are all for-profits. Though it might be contested how much they contribute to safety efforts, they do have alignment teams.
  • "would the utility positive thing to do be to force myself to get an ML alignment focused PhD and become a researcher?"
    • What do you mean by "utility positive" - utility positive for whom? You? The world writ large?
  • "Is it certain enough that AI alignment is so much more important that I should forgo what I think I will be good at/like to pursue it?"
    • I don't think anyone can answer this besides you. : ) I also think there are at least three questions here (besides the question of what you are good at/what you like, which imo is best addressed by testing your fit)
      • How important do you think AI alignment is? How confident are you in your cause prioritization?
      • How demanding do you want EA to be?
      • How much should you defer to others?
         

I agree with most points raised in the answers so far. Some specific points I feel worth mentioning:

  • I think 1/10,000 probability of biorisk this century is too low.
    • I'm more certain of this if you define biorisk as something like "global catastrophic biorisk that is causally responsible for dooming us" than if you think of it as "literally extinction this century from only bio".
  • I think it's probably good for you to explore a bunch and think of interests you have in specific tasks rather than absolute interests in AI alignment vs biorisk as a cause area.
  • "Follow your gut (among a range of plausible options) rather than penciled Fermis" makes gut sense but pencils badly. I don't have a strong robust answer to how to settle this deliberation.
  • Fortunately exploring is a fairly robust strategy under a range of reasonable assumptions.

Hi Isaac! We're in a similar situation: I'm 19, studying Computer Science at a mid-tier university, with a strong interest in AI alignment (and EA in general). Have you gone through the 80,000 Hours career guide yet? If not, it should give you some clarity. It recommends that we just focus on exploration and gaining career capital right now, rather than choosing one problem area or career path and going the whole hog.

I have been through the 80,000 Hours career guide before. I want to be a for-profit startup founder in an area with a ... I can move over to a CS and math double major early and focus on ML and ML-related research in undergrad. Since it is competitive, especially at top schools, knowing early is better than later. I mostly agree, but disagree a little bit with some of the career capital advice. I think they speak a little too broadly about transferable skills. I don't think I should spend the first 5-10 years of my life on things only tangentially related to what I want to do to ... (read more)

aogara · 2y
I like the idea of building skills early in your career. I would specifically highlight being opportunistic: individual people run into unusual opportunities to do difficult and valuable things, and especially early in your career it's worth taking them. This is mostly based on my n=1 personal experience; I wrote a little more about it here: https://forum.effectivealtruism.org/posts/AKLMZzsTc2ghxohhq/how-and-when-should-we-incentivize-people-to-leave-ea?commentId=w6RJ7L6H4nrJhFyAM

I'm now working on AI safety and I've been really happily surprised at the opportunities available to people trying to get into the field. There are courses (AGISF, ML Safety Scholars), internship and research opportunities, various competitions, and EAF and LW to showcase public writing. I basically worked on non-EA stuff for three years while reading about AI safety for fun and ended up pretty well positioned to work on it.
Kirsten · 2y
Agreed on your perspective on career capital.

Hi Isaac, I agree with many other replies here. I would just add this:

I think AI alignment research could benefit from a broader range of expertise, beyond the 'AI/CS experts + moral philosophers' model that seems typical in EA approaches.

Lots of non-AI topics in computer science seem relevant to specific AI risks, such as crypto/blockchain, autonomous agents/robotics, cybersecurity, military/defense applications, computational biology, big data/privacy, social media algorithms, etc. I think getting some training in those -- especially the topics best aligned with your for-profit business interests -- would position you to make more distinctive and valuable contributions to AI safety discussions. In other words, focus on the CS topics relevant to AI safety that are neglected, and not just important and tractable.

Even further afield, I think cases could be made that studying cognitive science, evolutionary psychology, animal behavior, evolutionary game theory, behavioral economics, political science, etc. could contribute very helpful insights to AI safety -- but they're not very well integrated into mainstream AI safety discussions yet.

Humans seem to be notoriously bad at predicting what will make us most happy, and we don't realize how bad we are at it. The typical advice to "pursue your passion" seems like bad advice, since passion often develops in parallel with other, more tangible factors being fulfilled. I think 80,000 Hours' literature review on "What makes for a dream job" will help you tremendously in assessing whether you would enjoy a career in AI alignment.

A better question would be about my effectiveness rather than my passion: how much more effective would I have to be in other cause areas, and how much weaker in alignment-related skills, for it to be better to choose other activities (or vice versa)? Over time, I can keep asking that question with new information.

Per Ivar Friborg · 2y
Ahh yes, this is also a good question which I don't have a good answer to, so I support your approach of revisiting it over time with new information. With very low confidence, I would expect that there will be more ways to aid AGI alignment indirectly as the space grows. A broader variety of ways to contribute to AGI alignment would then make it more likely for you to find something within that space that matches your personal fit. Generally speaking, examples of indirect ways to contribute to a cause include operations, graphic design, project management, software development, and community building. My point is that there are likely many different ways to aid in solving AGI alignment, which increases the chances of finding something you have the proper skills for. Again, I place very low confidence on this since I don't think I have an accurate understanding of the work needed within the space of AGI alignment; it's more meant as an alternative way of thinking about your question.

I think it's too early to decide specifically what you're going to work on for your career. I would just put your head down and focus mostly on learning math for a couple of years, then have a rethink once you're a third-year or so. As long as you have good math and CS skills, you will have many options later.

You ask, "Is it certain enough that AI alignment is so much more important that I should forgo what I think I will be good at/like to pursue it?"

One argument against pursuing AI alignment is that it's very unlikely to work. So long as humans are in any way involved with AI, weaknesses of the human condition will be a limiting factor that will prevent AI from ever being a safe technology.

If I were in your position, smart, educated, with a long life ahead of me, and really wanted to have a meaningful impact, I would focus on the machinery that is generating all these threats: the knowledge explosion.

From the perspective of almost-old age, I would advise you not to follow the "experts" who are focused on managing the products of an ever-accelerating knowledge explosion one by one by one, as that effort is doomed to failure.

Perhaps you might consider this thought experiment. Imagine yourself working at the end of a factory assembly line. The products are coming down the line to you faster, and faster, and faster. You can keep up for a while by working hard and being smart, but at some point you will be overwhelmed unless you can take control of the assembly line and slow it to a pace you can manage.

That's the challenge which will define the 21st century. Will we learn how to control the knowledge explosion? Or will it control us?

It's entirely possible to go into alignment work 10 or 15 years from now. If you spend the next decade working at tech companies, working with AI, or starting a company, you're going to end that time with a ton of useful skills that you can bring to alignment research if you want to. But you'll also have the flexibility to work on other problems.

My advice: pursue profit-driven work when you're young, then come back to direct EA work when you're more experienced.

Comments (2)
DC · 2y

What does forcing yourself look like concretely as an anticipated physical experience? What would working on the other stuff you would rather work on look like concretely as an anticipated physical experience?

Is this a thought exercise for figuring out what I would find engagement in?