
See the post introducing this sequence for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole. This post doesn’t necessarily represent the views of my employers.

In a previous post, I highlighted some observations that I think collectively demonstrate that the current “pipeline” for producing EA-aligned research and researchers is at least somewhat insufficient, inefficient, and prone to error. In my next three posts, I’ll describe several possible interventions to improve the pipeline, such as running research training programs, increasing and improving EAs’ use of non-EA options for research training and credentials (e.g., PhD programs),[1] and creating a database of information on research project ideas.

But first, in this post, I’ll discuss some goals we might want to have in mind when designing, evaluating, and/or implementing interventions to improve the pipeline. These goals could be roughly split into three categories:

  1. Goals focused on improving aspiring/junior researchers' expected future impact, via helping them with:
    • Building knowledge, skills, etc.
    • Network-building
    • Testing fit
    • Gaining credible signals of fit
    • Career planning
  2. Goals focused on improving the EA-aligned research pipeline, via helping with:
    • Learning about the intervention(s)
    • Providing mentorship/management training
    • Getting resources for the intervention(s)
  3. Other goals:
    • Direct impact (of the research produced as a direct, near-term result of the intervention)
    • Increasing awareness of and inclination towards EA-relevant things (among the aspiring/junior researchers, their "mentors", or other people)
    • Increasing demographic and/or cognitive diversity
    • Enjoyment (among people interacting with the intervention)
    • Avoiding downside risks

(Note that some of those "Other goals" are partly about improving aspiring/junior researchers' expected future impact and/or improving the EA-aligned research pipeline, but they don't focus primarily on just one of those things.)

Target audience and purpose of this post

As with the rest of this sequence:

  • This post is primarily intended to inform people who are helping implement or fund interventions to improve the EA-aligned research pipeline, or who could potentially do so in future
  • But it may also help people who themselves hope to “enter” and “progress through” the EA-aligned research pipeline

I think there are two ways this post could be useful for such people:

  • Reading this list of goals may help make the precise nature of the problem clearer, by providing an alternative, complementary framing to that given in my previous post.
  • Having these goals and some commentary on them written down could help ensure people don’t forget to consider some of them, and could help people think about how much they care about each goal relative to the other goals.
    • This in turn seems useful for the generic reason that one is likely to come up with different intervention ideas - and implement them differently - depending on what goals one has in mind and prioritises. (See the end of the post for two examples relevant to this context.)

Caveats

  • When designing, evaluating, and/or implementing an intervention for improving aspects of the EA-aligned research pipeline, you should of course also think for yourself about which goals are relevant to your specific situation.
    • And you should also probably consider doing things like conducting interviews or surveys with potential “users” or “experts”.
    • This post is adapted from a list of possible goals I thought of for a particular research training program; it’s possible I would’ve come up with a different list if, for example, I’d been thinking of other types of interventions from the start, or had gathered more systematic data on various groups’ needs.
  • Some of the goals I mentioned overlap in a way that might cause confusion or create a risk of double-counting the benefits of one solution relative to another.
  • I’d welcome pushback or additional ideas regarding my list, labels, or descriptions.

Possible goals

Goals focused on improving aspiring/junior researchers' expected future impact

Building knowledge, skills, etc.

Meaning: Allowing the aspiring/junior researchers to build knowledge, skills, habits, mindsets, etc. that help them do other impactful things later.

(This includes skills like “research intuitions” or “research taste” and habits/mindsets like proactiveness and self-direction.)

Network-building

Meaning: Allowing or helping aspiring/junior researchers to build connections with each other, with mentors (where relevant), and with other people they interact with in relation to the intervention.

These connections could help aspiring/junior researchers, or those they’re connected with, find projects to do, have someone vouch for them, execute projects (e.g., via exchanging info on some topic), etc.

Testing fit

Meaning: Allowing aspiring/junior researchers to test their own fit (passion, skills, etc.) for research careers, specific fields, specific topics, etc.

Something that could fit either here or under “Building knowledge, skills, etc.” is helping aspiring/junior researchers have the confidence (where actually warranted) to apply for things, tackle projects productively without too much second-guessing or risk of burnout, etc.

Gaining credible signals of fit

Meaning:

  • Making it easier for aspiring/junior researchers to get future roles/projects that they should get
  • Maybe making it less likely that aspiring/junior researchers get future roles/projects that they shouldn’t get
    • This seems less important, but might in some cases be beneficial both for the researcher and for other people
  • Reducing the time cost required by others in order to find or vet aspiring/junior researchers for roles/projects
    • Hiring rounds can be expensive and can fail to include potentially great candidates

Ways an intervention could achieve those goals include:

  • Allowing an aspiring/junior researcher to produce outputs that signal their skills, interests, etc.
  • Increasing the chances that a researcher’s outputs actually reach relevant audiences and influence their decisions, which could help with signalling, allow the researcher to have references for applications, etc.
  • Allowing mentors or others to get a sense of a researcher’s skills, interests, etc., beyond the info which the output alone provides
    • These people could then serve as references for the researcher, or could themselves provide the researcher with a job or funding
  • Maybe allowing an aspiring/junior researcher to try to produce outputs that would signal their skills, interests, etc., such that, when some of them fail to produce such outputs, this serves as more meaningful evidence that they shouldn’t be given the role/project

Career planning

Meaning: Providing aspiring/junior researchers with time, structures/guidance, encouragement/nudges, and feedback for career planning.

Goals focused on improving the EA-aligned research pipeline

Learning about the intervention(s)

Meaning:

  • Gaining valuable info about things like how useful a given intervention is for improving the pipeline, how to make it better, and whether it’d be worth making variants of it for certain specific purposes (e.g., specifically for PhD students, specifically for non-academic work, or specifically for people who aren’t part of the EA community).
  • Gaining practice implementing a given intervention, such that one could do so more effectively in future.

Providing mentorship/management training

Meaning: Allowing mentors/managers to develop their knowledge, skills, and confidence in mentoring/managing others.

This could occur as a result of things like the mentors/managers getting more experience in those roles, being provided with tips and resources, or going through formal training.

This could help them better mentor/manage other people in future, either as part of the same intervention, at the org they work for, or elsewhere.

Getting resources for the intervention(s)

The most notable resources for many relevant interventions are probably funding and mentor time.[2]

Pursuing this goal could include:

  • Making it sufficiently clear that the intervention is promising enough to warrant the funding necessary for it to continue to exist or be improved
  • Maintaining good relationships with mentors, and ensuring that the aspiring/junior researchers they’re paired with seem promising and well matched to that mentor

(Of course, achieving this goal is only a good thing if the intervention really is sufficiently promising to warrant these resources!)

Other goals

Direct impact

Meaning: The impact that results from the research outputs that the researcher produces as a direct result of the intervention (e.g., while participating in a research training program), as opposed to the impact that comes later and indirectly (e.g., via the researcher being more skilled or getting a more impactful job).

Increasing people's awareness of and inclination towards EA-relevant things

(See the Awareness/Inclination Model in How valuable is movement growth?)

Meaning: Increasing awareness of, inclination towards, and engagement with EA or particular EA-relevant causes, interventions, etc., in order to allow for benefits like more donations, better allocated donations, better career plans, or more and better EA-aligned research produced by non-EAs.

These changes in awareness, inclination, and engagement could occur among:

  • the aspiring/junior researchers reached by the intervention
  • their “mentors”
  • other people who interact with or observe those people, their outputs, or the intervention

For example, a well-run research training program that involves high-quality events and produces fairly high-quality research outputs might also increase mentors’ and observers’ inclination towards EA and the cause areas the program focused on.

This goal is closely related to “movement building”. Part of this goal could be referred to as "improving non-participants’ views and priorities".

Increasing demographic and/or cognitive diversity

Meaning: Increasing demographic and/or cognitive diversity[3] in EA as a whole, in specific orgs, or in the communities of people working on high-priority issues.

Demographic and cognitive diversity are quite distinct things, and actions that improve one kind of diversity will not necessarily improve the other. But often both goals could be achieved via the same types of interventions, such as interventions aimed at:

  • Recruiting/training new EA-aligned researchers who are from demographic groups that aren’t currently well represented in EA or in the relevant orgs/communities, or whose perspectives, thinking styles, etc. aren’t currently well represented in those places
  • Retaining such people in such places[4]

See also posts tagged Diversity and Inclusion.

Enjoyment

Meaning: Aspiring/junior researchers - and their “mentors”, where relevant - having a pleasant experience.

Avoiding downside risks

Relevant downside risks could include:

  • Information hazards
  • Risks to the reputation or relationships of EA or of important organisations/cause areas/communities
  • Hard-to-reverse harms to the future career prospects of the aspiring/junior researcher[5]
  • Causing people to focus too much or too little on research roles and/or roles at explicitly EA orgs

(For further discussion of accidental harm/downside risks in general, see here.)

Two examples of how different goals could push in different directions

These are just two examples, and just for illustration.

Example 1

The more weight we put on the goal of “Learning about the intervention(s)”, the more we should try a range of interventions, try lots of different specific choices within each intervention, gather data, write reflections as we go, share our reflections with other people afterwards, etc.

Example 2

Imagine we’re choosing which of the following to do:

  1. Things like guiding aspiring/junior researchers towards working on especially high-priority topics for their undergraduate or graduate theses
  2. Things like guiding aspiring/junior researchers towards getting great general research training or working under excellent research supervisors (even if those supervisors work on less relevant topics)

Typically, we should probably focus more on the former type of thing the more weight we put on the goals of “Increasing people's awareness of and inclination towards EA-relevant things” and “Direct impact”, but more on the latter type of thing the more weight we put on “Building knowledge, skills, etc.”.


As noted earlier, I’d welcome pushback or additional ideas regarding my list, labels, or descriptions.

In light of these goals, my next post will overview 19 intervention options for improving the EA-aligned research pipeline.


  1. I’m using the term “EAs” as shorthand for “People who identify or interact a lot with the EA community”; this would include some people who don’t self-identify as “an EA”. ↩︎

  2. Other resources could include advisors and “advocates” for the intervention itself. (Advocates could include people who just encourage other people to use the intervention, faculty members who help provide official university endorsement for a research training program, etc.) ↩︎

  3. “Cognitive diversity has been defined as differences in perspective or information processing styles” (Reynolds & Lewis, 2017) ↩︎

  4. Additionally, one could aim to increase the chance that a given person maintains (parts of) the perspectives, thinking styles, etc. that they have which are somewhat uncommon in EA. For example, one could try to reduce pressures or incentives favouring intellectual conformity, or could encourage people to form and share their independent impressions. ↩︎

  5. For convenience, I’ll sometimes lump various different types of people together under the label “aspiring/junior researchers”. I say more about this group of people in the previous post of this sequence. ↩︎

Comments

On approximately this topic, I also highly recommend the final section of Allan Dafoe’s post AI Governance: Opportunity and Theory of Impact. I think the ideas in that final section can be applied (with some modifications) to a wide variety of domains other than AI governance. 

Here’s an excerpt from that section: 

Within any given topic area, what should our research activities look like so as to have the most positive impact? To answer this, we can adopt a simple two stage asset-decision model of research impact. At some point in the causal chain, impactful decisions will be made, be they by AI researchers, activists, public intellectuals, CEOs, generals, diplomats, or heads of state. We want our research activities to provide assets that will help those decisions to be made well. These assets can include: technical solutions; strategic insights; shared perception of risks; a more cooperative worldview; well-motivated and competent advisors; credibility, authority, and connections for those experts. There are different perspectives on which of these assets, and the breadth of the assets, that are worth investing in.

On the narrow end of these perspectives is what I’ll call the product model of research, which regards the value of funding research to be primarily in answering specific important questions. The product model is optimally suited for applied research with a well-defined problem. [...]

I believe the product model substantially underestimates the value of research in AI safety and, especially, AI governance; I estimate that the majority (perhaps ~80%) of the value of AI governance research comes from assets other than the narrow research product[7]. Other assets include (1) bringing diverse expertise to bear on AI governance issues; (2) otherwise improving, as a byproduct of research, AI governance researchers' competence on relevant issues; (3) bestowing intellectual authority and prestige to individuals who have thoughtful perspectives on long term risks from AI; (4) growing the field by expanding the researcher network, access to relevant talent pools, improved career-pipelines, and absorptive capacity for junior talent; and (5) screening, training, credentialing, and placing junior researchers. Let’s call this broader perspective the field building model of research, since the majority of value from supporting research occurs from the ways it grows the field of people who care about long term AI governance issues, and improves insight, expertise, connections, and authority within that field.

Ironically, though, to achieve this it may still be best for most people to focus on producing good research products.  

An example I originally had in the final section

One type of intervention for improving the EA-aligned research pipeline is creating and/or improving research training programs. When doing so, I think three important, related questions are:

  • To what extent should the participants decide for themselves what projects to work on, vs deferring to the decisions/recommendations of others?
  • What fraction of each participant’s time should be spent tackling full research projects themselves, vs doing “delegated subtasks” / producing “intermediate products” (e.g., finding literature relevant to a question for someone else)?
    • (Maybe there’s a more elegant or standard way to describe this question.)
  • To what extent should participants be tackling a “fresh” question, vs tackling something closely related to what a mentor/manager is already working on?

I originally thought that, the more weight one puts on the goals of “credible signals of fit”, “knowledge and skills”, and “testing fit”, the more that would push in favour of participants deciding for themselves which projects to work on, tackling full research projects themselves, and tackling “fresh” questions. And I thought that putting more weight on “direct impact” would push in the opposite direction. 

But after some further thinking, a call with someone who's a research assistant to a great researcher, and some comments on a draft of this post, I now think this will vary a lot depending on the specifics. For example, doing “delegated subtasks” might lead to more and better feedback, since the delegator actually needs and will use what the aspiring/junior researcher produced, and that feedback should help with building knowledge and skills.

That said, I still think that thinking about the goals outlined in this post should help one think through the above questions in light of the specific situation they’re facing.

One type of "credible signal of fit" is referrals, statements, etc. from people who have gained info on an aspiring/junior’s skills, knowledge, interests, etc. I’m unsure how much this matters, but here are some notes on that. (I put these notes in a comment because they’re not especially important or well-informed.) 

I think there are two main reasons why things like referrals could help people get jobs they should get, when otherwise they wouldn’t:

  1. Some EA-aligned orgs do network-based hiring rather than open hiring rounds. Someone who knows about a network-based hiring round and knows about the skills, knowledge, interests, etc. of some aspiring/junior researchers could recommend those researchers to the org that’s hiring.
    1. (Some non-EA-aligned orgs also do network-based hiring, but realistically someone involved in running a relevant intervention probably won’t know when those orgs are doing those hiring rounds, and those orgs might not care about the person’s recommendations anyway.)
  2. Even open hiring rounds often don’t get applications from certain people who’d be great for the role, because those people hadn’t heard of it, ruled themselves out as a bad fit, or didn’t know if it was worth the time to apply. 
    1. People who have info on an aspiring/junior researcher’s fit could recommend that that aspiring/junior researcher applies to a potentially fitting thing.
    2. Or they could tell various orgs about what this aspiring/junior researcher seems like a fit for, so that the orgs can reach out if and when appropriate. 

But I think that those points are probably not very important, because:

  1. Open hiring rounds are more common than network-based hiring (I think?), and my tentative independent impression is that network-based hiring should perhaps be even rarer than it is (I’m pretty unsure of that, though). 
  2. Someone who’d be a great fit for something might have a decent chance of being recommended to it for some other reason, even if the people running a relevant intervention don’t recommend them? 
    1. E.g., the person might do great in a work test for some other org, and have their name passed along by that org

I think a similar set of four points could perhaps be made about funding, collaborations, etc., rather than jobs?

Additionally, things like referrals could reduce the time cost required by others in order to find or vet aspiring/junior researchers for roles/projects, and could perhaps make it less likely that aspiring/junior researchers get future roles/projects that they shouldn’t get (which could be good both for those aspiring/junior researchers and for the world; see also). 
