[ Question ]

Should I transition from economics to AI research?

by EAguy · 1 min read · 28th Feb 2021 · 14 comments


Tags: Requests (open) · Economics · Career choice · Frontpage
  • I'm looking to collect data (i.e. people's opinions and personal experiences) to inform my decision about whether to transition from economics to another field, such as AI research.
  • For a bit of context: I'm a PhD student in economics at a medium-ranked American university. Lately, I've become increasingly convinced by longtermist arguments and less convinced of the ability of economics as a science to tackle important longtermist questions.
  • I suppose I have three main uncertainties whose resolution would help me make a decision.
    • First, how much more impact can I expect to have as an AI researcher than as an economist? How should I think about this?
    • Second, how easy would it be for me to transition from economics to AI? I would particularly appreciate hearing people's experiences of how they have done this (or tried to), how easy it was, what pitfalls to avoid, etc. Relatedly, I'm also interested in reasons not to drop out of my program (for example, CV-related concerns: it might make it harder for me to get a job because I'm seen as flaky).
    • Finally, I'm curious whether there are other options I should consider. For instance, I could see myself trying to become a quantitative trader, if it is relatively easy to transition to that type of position given my background, and/or the relative expected impact as a quantitative trader is higher.
  • Thanks!

2 Answers

My guess is that AI safety papers are more impactful than longtermist econ ones, since they are directly targeted at significant near-term risks. Having said that, there are now a hundred or so people working on various aspects of long-term AI safety, which is more than can be said for longtermist econ, so I don't think the impact difference is huge. Maybe we're talking about a three-fold difference in impact, per unit time invested by an undifferentiated EA researcher - something that could easily be overridden by personal factors. But many longtermist researchers would argue that the impact difference is much larger or smaller than that.
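The "could easily be overridden by personal factors" point can be made concrete with a toy expected-value sketch. All numbers below are made-up placeholders for illustration, not claims about actual impact:

```python
# Toy back-of-the-envelope model: relative expected impact of a research path
# as (field multiplier) x (personal fit) x (probability of succeeding in the field).
# Every number here is a hypothetical placeholder.

def expected_impact(field_multiplier: float, personal_fit: float, p_success: float) -> float:
    """Relative expected impact per unit of research time invested."""
    return field_multiplier * personal_fit * p_success

# Suppose AI safety work is ~3x econ per unit time for a generic researcher,
# but this person has better fit and better odds in econ:
econ = expected_impact(field_multiplier=1.0, personal_fit=1.5, p_success=0.6)
ai   = expected_impact(field_multiplier=3.0, personal_fit=0.8, p_success=0.3)

print(f"econ: {econ:.2f}, AI safety: {ai:.2f}")
# With these placeholder numbers, personal factors flip the 3x field advantage.
```

The point is only structural: a three-fold field-level multiplier is small enough that plausible differences in fit and success probability can reverse the comparison.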

My experience of transitioning from medicine to AI is that it was very costly. I feel I was set back by ~5 years in my intellectual and professional development - I had to complete a master's degree and do years of research assisting and junior research work just to get back to my previous level of knowledge and seniority. From an impact standpoint, I clearly had to exit medicine, but it's not clear that moving to AI safety had any greater impact than moving into (for example) biosecurity would have.

For most people doing a PhD in any long-term-relevant subject (econ, biology, AI, stats) who have a chance of a tenure-track position at a top-20 (worldwide) school, I expect it will make sense to push for that for at least ~3 years, and to postpone worries about pivoting until after that, because switching subjects reduces those odds a lot.

More broadly, as a community, we mostly ought to ask people to pivot their careers when they are young (e.g. as undergraduates), or when the impact differential is large (e.g. medicine to biosecurity) - which I don't think it really is when you're contrasting the right parts of econ with AI safety.

Finally, I imagine quant trading is a non-starter for a longtermist who is succeeding in academic research. As a community, suppose we already have significant ongoing funding from 3 or so of the world's 3k billionaires. What good is an extra one-millionaire? Almost anyone's comparative advantage is more likely to lie in spending the money, but even more so if one can do so within academic research.

> Finally, I imagine quant trading is a non-starter for a longtermist who is succeeding in academic research. As a community, suppose we already have significant ongoing funding from 3 or so of the world's 3k billionaires. What good is an extra one-millionaire? Almost anyone's comparative advantage is more likely to lie in spending the money, but even more so if one can do so within academic research.

It seems quite wrong to me to present this as so clear-cut. I think if we don't get major extra funding the professional longtermist community might plateau at a stable size in perhaps the low thousands. A successful quantitative trader could support several more people at the margin (a very successful trader could support dozens). If you're a good fit for the crowd, it might also be a good group to network with.

If you're particularly optimistic about future funding growth, or pessimistic about community growth, you might think it's unlikely we end up in that world in a realistic timeframe, but there's likely to still be some hedging value.
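The "support several more people at the margin" claim is just division, but it may help to see it written out. The cost-per-researcher figure below is a hypothetical placeholder, not a real estimate:

```python
# Hypothetical funding arithmetic: how many researchers could a quant trader's
# annual donations support? The $100k cost-per-researcher figure is an
# illustrative placeholder, not a real estimate of longtermist salary + overhead.

def researchers_funded(annual_donation: int, cost_per_researcher: int = 100_000) -> int:
    """Whole number of researcher-years fundable per year of donations."""
    return annual_donation // cost_per_researcher

# A successful trader donating $500k/yr vs. a very successful one donating $3M/yr:
print(researchers_funded(500_000))    # 5
print(researchers_funded(3_000_000))  # 30
```

Under these placeholder numbers, one very successful trader funds dozens of researcher-years annually, which is where the hedging value against a funding plateau comes from.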

To be clear, I mostly wouldn't want people in the OP's situation to drop the PhD to join a hedge fund. But it's worth understanding that e.g. the main... (read more)

EAguy (7mo): Thanks a lot for your comment. What you describe is a different route to impact than what I had in mind, but I suppose I could see myself doing this, even though it sounds less exciting than making a difference by contributing directly to making AI safer.
Owen_Cotton-Barratt (7mo): Note that I think the mechanisms I describe aren't specific to economics, but cover academic research generally - and will also include most of how most AI safety researchers (even those not in academia) will have impact. There are potentially major crux moments around AI, so there's also the potential to do an excellent job engineering real transformative systems to be safe at some point (but most AI safety researchers won't be doing that directly). I guess the indirect routes to impact for AI safety might feel more exciting because they're more closely connected to the crucial moments - e.g. you might hope to set some small piece of the paradigm that the eventual engineers of the crucial systems use, or to support a culture of responsibility among AI researchers, making it less likely that people at the key time ignore something they shouldn't.

Thanks for sharing your experience and thanks for your comment! This is very useful!

> For most people in a PhD in any long-term relevant subject (econ, biology, AI, stats), with a chance of a tenure-track position at a top-20 (worldwide) school, I expect it will make sense to push for that for at least ~3 years, and to postpone worries about pivoting until after that. Because switching subjects reduces those odds a lot.

I had two follow-up questions: first, do you think there is a big difference in impact between getting a tenure-track position at a top-20 ... (read more)

RyanCarey (7mo): I would guess medium-big, especially if your route to impact is teaching PhD students (or anything else that requires a lot of funding), as opposed to governmental advising (or anything that doesn't). We won't really know until we see someone study the question. My guess is that for most switchers, the new PhD program would be worse than the current one (AI is more competitive than econ, and age works against you), so they would likely end up in a worse tenure-track position. Plus the impact is delayed, and some of it is foreclosed by retirement. So the cost seems decent-sized.

What sort of economics are you doing?

Have you checked out the research agenda of the Global Priorities Institute? There are some important questions in longtermism and general cause prioritisation that can be tackled by economists.

There are also more applied AI/tech-focused economics questions that seem important for longtermists (e.g. if the GPI stuff seems too abstract for you).

jackmalde (7mo): Yes, this 80,000 Hours article [https://80000hours.org/articles/research-questions-by-discipline/#introduction] has some good ideas.
EAguy (7mo): Thanks for this!

I'm in my first year, so I haven't specialized in a subfield yet. I've checked out GPI's research agenda, but most of the economics-y questions look very abstract.

jackmalde (7mo): Yeah, that's fair. Although just in case you don't know, they updated it in October 2020, which included a slight expansion of the economics topics. For example, a notable addition was the "Economic growth, population growth and inequality" section, which is perhaps less abstract than some of the other areas.
2 comments

I am a philosopher and thus fundamentally unqualified to answer this question, so take these thoughts with a grain of salt. However:

  1. From my outsider's perspective, it seems as though AI safety uses a lot of concepts from economics (especially expected utility theory). And if you're at the grad level in economics, then you probably have a decent math background. So at least many of your skills seem like they would transfer over.
  2. I don't know how much impact you can expect to have as an AI researcher compared to an economist. But that seems like the kind of question an economist would be well-equipped to work on answering! If you happen to not already be familiar with cause prioritization research, you might consider staying in economics and focusing on it, rather than switching to AI, as cause prioritization is pretty important in its own right.
  3. Similarly, you might focus on global priorities research: https://forum.effectivealtruism.org/posts/dia3NcGCqLXhWmsaX/an-introduction-to-global-priorities-research-for-economists. Last I knew the Global Priorities Institute was looking to hire more economists; don't know if that will still be true when you finish your grad program, but at the very least I expect they'll still be looking to collaborate with economists at that time.

In other words, it seems like you might have a shot at transitioning (though I am very, very unqualified to assess this), but also there seem to be good, longtermist-relevant research opportunities even within economics proper.

Thanks for all this! I'm not familiar with AI safety, and even if some concepts are used in both AI and economics, I suspect there would still be a lot of retraining involved - but I could be wrong.

I'll take a look at the blog posts you mentioned!