
Hi, I'm an 18-year-old going into college in a week. I am studying computer engineering and mathematics. Since I have a technical interest and AGI has a much higher probability of ending humanity this century (1/10, I think) than other causes that I would rather work on (like biorisk, at 1/10,000), would the utility positive thing to do be to force myself to get an ML alignment focused PhD and become a researcher?

I am at a mid-tier university. I think I could force myself to do AI alignment since I have a little interest in it, but not as much as the average EA. I wouldn't find as much engagement in it, and I also have an interest in starting a for-profit company, which couldn't happen with AGI alignment (most likely). I would rather work on a hardware/software combo for virus detection (biorisk), climate change, products for the third world, other current problems, or problems that will emerge in the future.

Is it certain enough that AI alignment is so much more important that I should forgo what I think I will be good at/like to pursue it?

Edit: a comment I made confused some people into thinking I was posing a false dichotomy between "pursuing my passion" and doing AI alignment. I've removed that comment.

9 Answers

Hey! I think this is a good question that a lot of people have.

I don't think either extreme is a good idea. I wouldn't recommend simply "following your passion", but I also wouldn't recommend "forcing" yourself to do the job that looks best in expectation.

In reality, the best job for you will probably be something you have at least some interest in, especially if it's going to be self-directed theoretical work! But I'd bet there are a lot of things you're interested in, or could be interested in... :)

I'd recommend trying lots of different areas over the next few years, through courses or reading or internships or conversations or summer jobs, and trying to find a few different areas that interest you. Ideally they'd be areas where you can also help a lot of people!

I also think it's totally okay to be uncertain about what you're going to work on long-term until your late 20s or even later, so you shouldn't worry about following an interest and later on changing your mind.

I hope that helps and I'd love to see an update in a year or two!

"Follow your passion" wasn't the other idea I was going with. I do think that some combination of personal interest and external importance will probably yield the highest utility for a given personality. I just wanted to make sure that AGI alignment wasn't so overwhelmingly important that I would practically have to set aside my own feelings for the sake of humanity's existence. I have also read a recent post questioning the basis of the "recommended careers" in EA and 80,000 Hours (post). Thanks for the post!

Why do you think you'd need to "force yourself"? More specifically, have you tested your fit for any sort of AI alignment research?

If not, I would start there! For example, I have no CS background, am not STEM-y (I was a Public Policy major), and told myself I wasn't the right kind of person to work on technical research... But I felt that AI safety was important enough that I should give it a proper shot, so I spent some time coming up with ELK proposals, starting the AGISF curriculum, and thinking about open questions in the field. I ended up, surprisingly, feeling like I wasn't the most terrible fit for theoretical research!

If you're interested in testing your fit, here are some resources.

I could possibly connect you to people who would be interested in testing their fit too, if that's of interest. In my experience, it's useful to have like-minded people supporting you!

Finally, +1 to what Kirsten is saying - my approach to career planning is very much, "treat it like a science experiment," which means that you should be exploring a lot of different hypotheses about what the most impactful (including personal fit etc.) path looks like for you.

Edit: here are also some scattered thoughts about other factors that you've mentioned:

  • "I also have an interest in starting a for-profit company, which couldn't happen with AGI alignment (most likely)."
    • FWIW, the leading AI labs (OpenAI, Anthropic, and I think DeepMind) are all for-profits. Though it might be contested how much they contribute to safety efforts, they do have alignment teams.
  • "would the utility positive thing to do be to force myself to get an ML alignment focused PhD and become a researcher?"
    • What do you mean by "utility positive" - utility positive for whom? You? The world writ large?
  • "Is it certain enough that AI alignment is so much more important that I should forgo what I think I will be good at/like to pursue it?"
    • I don't think anyone can answer this besides you. : ) I also think there are at least three questions here (besides the question of what you are good at/what you like, which imo is best addressed by testing your fit):
      • How important do you think AI alignment is? How confident are you in your cause prioritization?
      • How demanding do you want EA to be?
      • How much should you defer to others?
         

I agree with most points raised in the answers so far. Some specific points that I feel are worth mentioning:

  • I think 1/10,000 probability of biorisk this century is too low.
    • I'm more certain of this if you define biorisk as something like "global catastrophic biorisk that is causally responsible for dooming us" than if you think of it as "literally extinction this century from only bio".
  • I think it's probably good for you to explore a bunch and think of interests you have in specific tasks rather than absolute interests in AI alignment vs biorisk as a cause area.
  • "Follow your gut (among a range of plausible options) rather than penciled Fermis" makes gut sense but pencils badly. I don't have a strong robust answer to how to settle this deliberation.
  • Fortunately exploring is a fairly robust strategy under a range of reasonable assumptions.

Humans seem to be notoriously bad at predicting what will make us most happy, and we don't realize how bad we are at it. The typical advice to "pursue your passion" seems like bad advice, since our passion often develops in parallel with other, more tangible factors being fulfilled. I think 80,000 Hours' literature review on "What makes for a dream job" will help you tremendously in better assessing whether you would enjoy a career in AI alignment.

A better question would be about my effectiveness rather than my passion: how much more effective would I have to be in other cause areas, and how much weaker in alignment-related skills, for it to be better to choose other work (or vice versa)? Over time, I can keep revisiting that question as I get new information.
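
One way to make that trade-off concrete is a crude back-of-envelope comparison. The sketch below is only illustrative: the multiplicative model and every input other than the 1/10 and 1/10,000 risk figures from the original question (the tractability and personal-fit multipliers) are assumptions invented for the example, not estimates from this thread.

```python
# Back-of-envelope comparison of expected impact across cause areas.
# All multipliers are made-up illustrative numbers, not real estimates.

def expected_impact(p_catastrophe, tractability, personal_fit):
    """Crude multiplicative Fermi model: chance the problem matters this century,
    how tractable marginal work on it is, and how well the person fits the work."""
    return p_catastrophe * tractability * personal_fit

# Hypothetical inputs: AI alignment with weak personal fit vs. biorisk with strong fit.
alignment = expected_impact(p_catastrophe=0.1, tractability=0.05, personal_fit=0.3)
biorisk = expected_impact(p_catastrophe=0.0001, tractability=0.10, personal_fit=0.9)

print(f"alignment: {alignment:.6f}, biorisk: {biorisk:.6f}")
# The ratio shows how large a fit/effectiveness gap would be needed to flip the ranking.
print(f"ratio (alignment / biorisk): {alignment / biorisk:.0f}x")
```

Revisiting the inputs as you learn more about your own fit and about how tractable each area really is amounts to the "keep asking with new information" approach described above.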

Per Ivar Friborg
Ahh yes, this is also a good question that I don't have a good answer to, so I support your approach of revisiting it over time with new information. With very low confidence, I would expect that there will be more ways to aid AGI alignment indirectly as the space grows. A broader variety of ways to contribute to AGI alignment would then make it more likely for you to find something within that space that matches your personal fit. Generally speaking, examples of indirect ways to contribute to a cause include operations, graphic design, project management, software development, and community building. My point is that there are likely many different ways to aid in solving AGI alignment, which increases the chances of finding something you have the proper skills for. Again, I place very low confidence on this since I don't think I have an accurate understanding of the work needed within the space of AGI alignment. This is more meant as an alternative way of thinking about your question.

Hi Isaac! We're in a similar situation: I'm 19, studying Computer Science at a mid-tier university, with a strong interest in AI alignment (and EA in general). Have you gone through the 80,000 Hours career guide yet? If not, it should give you some clarity. It recommends that we just focus on exploration and gaining career capital right now, rather than choosing one problem area or career path and going the whole hog.

I have been through the 80,000 Hours career guide before. I want to be a for-profit startup founder. I can move over to a CS and Math double major early and focus on ML and ML-related research in undergrad. Since it is competitive, especially at top schools, knowing early is better than later. I mostly agree, but disagree a little bit with some of the career capital advice. I think they speak a little too broadly about transferable skills. I don't think I should spend the first 5-10 years of my career on work only tangentially related to what I want to do ...

aog
I like the idea of building skills early in your career. I would specifically highlight being opportunistic — individual people run into unusual opportunities to do difficult and valuable things, and especially early in your career it's worth doing them. This is mostly based on my n=1 personal experience; I wrote a little more about it here: https://forum.effectivealtruism.org/posts/AKLMZzsTc2ghxohhq/how-and-when-should-we-incentivize-people-to-leave-ea?commentId=w6RJ7L6H4nrJhFyAM I'm now working on AI safety and I've been really happily surprised at the opportunities available to people trying to get into the field. There are courses (AGISF, ML Safety Scholars), internship/research opportunities, various competitions, and EAF + LW to showcase public writing. I basically worked on non-EA stuff for three years while reading about AI safety for fun and ended up pretty well positioned to work on it.
Kirsten
Agreed with your perspective on career capital.

I think it's too early to decide specifically what you're going to work on for your career. I would just put your head down and focus mostly on learning math for a couple of years, then have a rethink once you're a third year or so. As long as you have good math and CS skills, you will have many options later.

Hi Isaac, I agree with many other replies here. I would just add this:

I think AI alignment research could benefit from a broader range of expertise, beyond the usual 'AI/CS experts + moral philosophers' model that seems typical of EA approaches.

Lots of non-AI topics in computer science seem relevant to specific AI risks, such as crypto/blockchain, autonomous agents/robotics, cybersecurity, military/defense applications, computational biology, big data/privacy, social media algorithms, etc. I think getting some training in those -- especially the topics best aligned with your for-profit business interests -- would position you to make more distinctive and valuable contributions to AI safety discussions. In other words, focus on the CS topics relevant to AI safety that are neglected, and not just important and tractable.

Even further afield, I think cases could be made that studying cognitive science, evolutionary psychology, animal behavior, evolutionary game theory, behavioral economics, political science, etc. could contribute very helpful insights to AI safety -- but they're not very well integrated into mainstream AI safety discussions yet.

You ask, "Is it certain enough that AI alignment is so much more important that I should forgo what I think I will be good at/like to pursue it?"

One argument against pursuing AI alignment is that it's very unlikely to work. So long as humans are in any way involved with AI, weaknesses of the human condition will be a limiting factor that will prevent AI from ever being a safe technology.

If I were in your position, smart, educated, with a long life ahead of me, and really wanted to have a meaningful impact, I would focus on the machinery that is generating all these threats: the knowledge explosion.

From the perspective of almost-old age, I would advise you not to follow the "experts" who are focused on the effort to manage the products of an ever-accelerating knowledge explosion one by one by one, as that effort is doomed to failure.

Perhaps you might consider this thought experiment. Imagine yourself working at the end of a factory assembly line. The products are coming down the line to you faster, and faster, and faster. You can keep up for a while by working hard and being smart, but at some point you will be overwhelmed unless you can take control of the assembly line and slow it to a pace you can manage.

That's the challenge that will define the 21st century. Will we learn how to control the knowledge explosion? Or will it control us?

It's entirely possible to go into alignment work 10 or 15 years from now. If you spend the next decade working at tech companies, working with AI, or starting a company, you're going to end that time with a ton of useful skills that you can bring to alignment research if you want to. But you'll also have flexibility to work on other problems too. 

My advice: pursue profit-driven work when you're young, come back to direct EA work when you're more experienced. 

Comments

DC
What does forcing yourself look like concretely as an anticipated physical experience? What would working on the other stuff you would rather work on look like concretely as an anticipated physical experience?

Is this a thought exercise for figuring out what I would find engagement in?
