This post doesn’t necessarily represent the views of my employers.
There are many people who have the skills and desire to do EA-aligned research, or who could develop such skills via some experience, mentorship, or similar.
There are many potentially high-priority open research questions that have been identified.
And there are many funders who would be happy to pay for high-quality research on such questions.
Sounds like everything must be lining up perfectly, right?
In my view, the answer is fairly clearly “No”, and getting closer to a “Yes” could be very valuable. The three ingredients mentioned above do regularly combine to give us new, high-quality research and researchers, but:
- This is happening more slowly than we’d like
- At any given time, we still have a lot of each ingredient left over
- This is requiring more “overhead” than seems ideal
- E.g., lots of 1-1 career advice, coaching, and mentorship from experienced people; time-consuming hiring and grant evaluation processes
- There are more “misfires” than we’d like
- E.g., aspiring researchers choosing low-priority questions or tackling questions poorly; great people and projects being passed over for hiring or funding
In this sequence, I try to:
- Provide a clearer description of what I see as the “problem”, its drivers, and its consequences
- Outline some goals we might have when designing interventions to improve the EA research pipeline
- Provide an overview of 18 intervention options that seem worth considering
- Describe one of those intervention options in more detail, in the hope that this leads either to a good argument against that option or to someone actually building it
This sequence is primarily intended to inform people who are helping implement or fund interventions to improve the EA-aligned research pipeline, or who could potentially do so in future.
This sequence may also help people who themselves hope to “enter” and “progress through” the EA-aligned research pipeline.
Epistemic status / caveats for the sequence
I’m confident that these posts will usefully advance an important discussion. That said, I expect my description of the “problem” and my list of “goals” could be at least somewhat improved. And it’s possible that some of my ideas for solutions are just bad and/or that I’ve missed some other, much better ideas.
I’ve done ~6 FTE months of academic research (producing one paper) and ~11 FTE months of research at EA orgs. My framings and suggestions are probably somewhat skewed towards:
- non-academic research
- research for EA audiences (rather than for e.g. mainstream academics or policymakers)
- longtermist research and global priorities research
I've spent roughly 50 hours actually writing, editing, or talking about these posts. Additionally, the topics they address are probably among the 3-10 things I’ve spent the most time thinking about since early 2020. That said, there are various relevant bodies of evidence and literature that I haven’t dived into, such as metascience.
It also seems worth saying explicitly that:
- Many people should do work other than EA-aligned research
- This includes even many people who have the skills and desire to do EA-aligned research (since something else might be an even better fit for them, or even more impactful)
- Indeed, I think one thing we should want from improvements to the EA research pipeline is a reduction in how much time people who actually shouldn’t do EA-aligned research spend trying, training for, or pursuing such work
- EA-aligned research does not necessarily have to be done at explicitly EA organisations
- E.g., one could research important topics in valuable ways at a regular think tank or academic institution
- See also Working at EA vs Non-EA Orgs
Related previous work
I am far from the first person to discuss this cluster of topics. The following links may be of interest to readers of this post, and some of them informed my own thinking substantially:
- Posts tagged Scalably using labour and/or Research Training Programs
- Benjamin Todd on what the effective altruism community most needs
- A comment thread from an AMA with Owen Cotton-Barratt
- Bottlenecks and Solutions for the X-Risk Ecosystem
And here are some links that are somewhat relevant, but less so:
- Factored Cognition
- Readings and notes on how to do high-impact research
- Ingredients for creating disruptive research teams
- After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
- Posts tagged Get Involved, Working at EA vs Non-EA Orgs, and/or EA Hiring
I also previously touched on related issues in my post A central directory for open research questions.
For comments on earlier drafts of one or more of these posts, I’m grateful to Nora Ammann, Edo Arad, Jungwon Byun, Alexis Carlier, Ryan Gourley, David Janků, Julian Jamison, Peter Hurford, David Moss, David Reinstein, and Linch Zhang. For earlier discussions that informed or may have informed these posts, I’m grateful to many of the same people and to Ryan Briggs, Stanislava Fedorova, Ozzie Gooen, Alex Lintz, Amanda Ngo, Jason Schukraft, and Jesse Shulman. In some places, I’m directly drawing on or remixing specific ideas from one or more of these people. That said, these posts do not necessarily represent the views of any of these people.
For example, Rethink Priorities recently received ~665 applications for a summer research internship program, with only ~10 internship slots available. Given the limited slots, we had to reject many applicants at stage 2 who seemed potentially quite promising, and reject some candidates at stage 3 whom we were fairly confident we’d have been happy to hire if we had somewhat more funding and management capacity. ↩︎
I think this also matches the views of many other people; see “Related previous work”. ↩︎
Yes, 18. Things got a little out of hand. ↩︎
My original draft of this post briefly summarised those intervention options, but some commenters suggested that I refrain from mentioning potential solutions until readers had read and thought more about the problems and goals we’re aiming to address. See also Hold Off On Proposing Solutions. ↩︎