This post doesn’t necessarily represent the views of my employers.
There are many people who have the skills and desire to do EA-aligned research, or who could develop such skills via some experience, mentorship, or similar.
There are many potentially high-priority open research questions that have been identified.
And there are many funders who would be happy to pay for high-quality research on such questions.
Sounds like everything must be lining up perfectly, right?
In my view, the answer is fairly clearly “No”, and getting closer to a “Yes” could be very valuable. The three ingredients mentioned above do regularly combine to give us new, high-quality research and researchers, but:
- This is happening more slowly than we’d like
- At any given time, we still have a lot of each ingredient left over
- This is requiring more “overhead” than seems ideal
- E.g., lots of 1-1 career advice, coaching, and mentorship from experienced people; time-consuming hiring and grant evaluation processes
- There are more “misfires” than we’d like
- E.g., aspiring researchers choosing low-priority questions or tackling questions poorly; great people and projects being passed over for hiring or funding
In this sequence, I try to:
- Provide a clearer description of what I see as the “problem”, its drivers, and its consequences
- Outline some goals we might have when designing interventions to improve the EA research pipeline
- Overview 18 intervention options that seem worth considering
- Describe one of those intervention options in more detail, in the hope that this leads either to a good argument against that option or to someone actually building it
This sequence is primarily intended to inform people who are helping implement or fund interventions to improve the EA-aligned research pipeline, or who could potentially do so in future.
This sequence may also help people who themselves hope to “enter” and “progress through” the EA-aligned research pipeline.
Epistemic status / caveats for the sequence
I’m confident that these posts will usefully advance an important discussion. That said, I expect my description of the “problem” and my list of “goals” could be at least somewhat improved. And it’s possible that some of my ideas for solutions are just bad and/or that I’ve missed some other, much better ideas.
I’ve done ~6 FTE months of academic research (producing one paper) and ~11 FTE months of research at EA orgs. My framings and suggestions are probably somewhat skewed towards:
- non-academic research
- research for EA audiences (rather than for e.g. mainstream academics or policymakers)
- longtermist research and global priorities research
I've spent roughly 50 hours actually writing, editing, or talking about these posts. Additionally, the topics they address are probably one of the 3-10 things I’ve spent the most time thinking about since early 2020. That said, there are various relevant bodies of evidence and literature that I haven’t dived into, such as metascience.
It also seems worth saying explicitly that:
- Many people should do work other than EA-aligned research
- This includes even many people who have the skills and desire to do EA-aligned research (since something else might be an even better fit for them, or even more impactful)
- Indeed, I think one thing we should want from improvements to the EA research pipeline is a reduction in how much time people who actually shouldn’t do EA-aligned research spend trying, training for, or pursuing such work
- EA-aligned research does not necessarily have to be done at explicitly EA organisations
- E.g., one could research important topics in valuable ways at a regular think tank or academic institution
- See also Working at EA vs Non-EA Orgs
Related previous work
I am far from the first person to discuss this cluster of topics. The following links may be of interest to readers of this post, and some of them informed my own thinking substantially:
- Posts tagged Scalably using labour and/or Research Training Programs
- Benjamin Todd on what the effective altruism community most needs
- A comment thread from an AMA with Owen Cotton-Barratt
- Bottlenecks and Solutions for the X-Risk Ecosystem
And here are some links that are somewhat relevant, but less so:
- Factored Cognition
- Readings and notes on how to do high-impact research
- Ingredients for creating disruptive research teams
- After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
- Posts tagged Get Involved, Working at EA vs Non-EA Orgs, and/or EA Hiring
I also previously touched on related issues in my post A central directory for open research questions.
For comments on earlier drafts of one or more of these posts, I’m grateful to Nora Ammann, Edo Arad, Jungwon Byun, Alexis Carlier, Ryan Gourley, David Janků, Julian Jamison, Peter Hurford, David Moss, David Reinstein, and Linch Zhang. For earlier discussions that did or may have informed these posts, I’m grateful to many of the same people and to Ryan Briggs, Stanislava Fedorova, Ozzie Gooen, Alex Lintz, Amanda Ngo, Jason Schukraft, and Jesse Shulman. In some places, I’m directly drawing on or remixing specific ideas from one or more of these people. That said, these posts do not necessarily represent the views of any of these people.
For example, Rethink Priorities recently received ~665 applications for a summer research internship program, with only ~10 internship slots available. Given the limited slots available, we had to reject at stage 2 many applicants who seemed potentially quite promising, and reject at stage 3 some candidates we were fairly confident we’d have been happy to hire if we had somewhat more funding and management capacity.
I think this also matches the views of many other people; see “Related previous work”.
Yes, 18. Things got a little out of hand.
My original draft of this post briefly summarised those intervention options, but some commenters suggested that I refrain from mentioning potential solutions till readers had read and thought more about the problems and goals we’re aiming to solve. See also Hold Off On Proposing Solutions.
Great initiative @MichaelA. I'm not sure what a 'sequence' does, but I assume this means there'll be a series of related posts to follow, is that right?
Yeah, I think it's basically EA Forum / LessWrong jargon for "series of posts".
There are 4 more posts to come in this sequence, plus ~2 somewhat related posts that I'll tack on afterwards, one of which I've already posted: Notes on EA-related research, writing, testing fit, learning, and the Forum
I’m not fully satisfied with the label I’m currently using for this topic/effort and this sequence. Here are some alternatives that I considered or that other people suggested:
(That's in roughly descending order of how much I like them. And of course I currently prefer the label I'm actually using at the moment.)
I think the current title of the sequence is fine, and probably better than the rest of the alternatives you put forward!
Luke Muehlhauser recently published a new post that's also quite relevant to the topics covered in this sequence: EA needs consultancies
See also his 2019 post Reflections on Our 2018 Generalist Research Analyst Recruiting.
I briefly discussed this with MichaelA offline, but I'm interested in which "pipe" in the pipeline this sequence is primarily covering, and also which pipe it should be primarily covering.
A central example* of the EA-aligned research pipeline might look something like
get interested in EA -> be a junior EA researcher -> be an intermediate EA researcher -> be a senior EA researcher.
As a junior EA researcher, I've mostly been reading this sequence as focusing on the first pipe in this pipeline.
However, I don't have a principled reason to believe that this is the most critical component in the EA research pipeline, and I can easily think of strong arguments for focusing on later stages.
There's a related question that's pretty decision-relevant for me, which is that I probably should have some principled take on what fraction of my "meta work-time" ought to be allocated to "advising/giving mentorship to others" vs "seeking mentorship and other ways to self-improve on research."
*Though this isn't the only possible pipeline; e.g., maybe we could instead recruit senior researchers directly.
Yeah, I agree that this is an important concrete question, and unfortunately I don't have much in the way of useful general-purpose thoughts on it, except:
(It seems possible to work out more specific and detailed advice than that. I'd be keen for someone to do that, or to find and share what's already been worked out. I just haven't done it myself.)
FWIW, I think this sequence is intended to be relevant to many more "pipelines" than just that one (if we make "pipeline" a unit of analysis of the size you suggest), such as:
I think there's basically a lot of pipelines that intersect and have feedback loops. I also think someone can "specialise" for learning about this whole web of issues and developing interventions for them, that many interventions could help with multiple pipes/steps/whatever, etc.
I think that this might sound frustratingly "holistic" and vague, rather than analytical and targeted. But I basically see this sequence as a fairly "birds eye view" perspective that contains within it many specifics. And as I say in the third post:
Relatedly, I don't think this sequence has a much stronger focus on one of those pipes/paths/intervention points than on others, with the exception that I unfortunately don't say much here about dissemination and use of research.
Hey! I've done an audio recording of me reading this post for the EA Forum podcast (I'm going to try to get the rest of this sequence recorded soon).