
See the post introducing this sequence for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole. This post doesn’t necessarily represent the views of my employers.

This post briefly highlights some things that I (and I think many others) have observed or believe, which I think collectively demonstrate that the current processes by which new EA-aligned research and researchers are “produced” are at least somewhat insufficient, inefficient, and prone to error. (Though I do think that what we have is far better than nothing, that it’s been getting better over time, and that it’s probably better than what most analogous communities/fields would have.)

I think that these observations/beliefs help (a) clarify the exact nature of the “problems” with the EA-aligned research pipeline and (b) hint at what interventions to improve the pipeline would need to do.

In brief:

  1. There are many important open questions
  2. Many orgs and funders want research(ers)
  3. There are many aspiring/junior researchers
  4. Some shouldn’t pursue EA-aligned research roles
  5. Some should pursue and get EA-aligned research roles (but don’t)
  6. Maybe more of them should try independent research?
  7. Independent research attempts could be improved
  8. Non-EA research efforts could be improved
  9. Existing solutions are inefficient and insufficient
  10. Senior people have capacity to help, if efficiently leveraged

That said, there are also many other ways one could break down, frame, and investigate problems in this area. For example, one could focus more on EA-aligned researchers insufficiently learning from or gaining credibility with actors outside the EA community (see also field building).[1] Or one could focus on more specific bottlenecks/pain points for specific groups of people, and conduct interviews and surveys with members of those groups to gather data on that.[2]

The following posts in this sequence will discuss what we could do about the problems highlighted here.

1. Many open questions

There’s a huge amount of high-priority research to be done.

It seems to me that this is strikingly clear, so, for brevity, I won’t provide much further justification here (but I'm happy to do so in the comments, if requested!).

As one piece of evidence, see A central directory for open research questions. Presumably many of those questions aren’t actually high-priority or are no longer “open”, but I’m pretty sure a substantial fraction are, and that’s a substantial fraction of a very large number of questions!

2. Orgs and funders want research(ers)

There are many orgs and funders who would be willing and able to hire or fund people to do such research if there were people who the orgs/funders could trust would do it well (and without requiring too much training or vetting). But not if the orgs/funders think the people are choosing lower-priority questions, are inexperienced, or aren't especially skilled, or if the orgs/funders simply would have a hard time assessing the people and/or their plans. (See also EA is vetting-constrained and Ben Todd discussing organizational capacity, infrastructure, and management bottlenecks.)[3]

3. Many aspiring/junior researchers

There seem to be many EAs[4] who don’t yet have substantial experience doing EA-relevant research and for whom one or more of the following is true:

  • They want to do EA-relevant research long-term
  • They want to test their fit for doing EA-relevant research
  • They want to do a bit of EA-relevant research in order to gain knowledge and skills that will improve their ability to do other high-impact things
    • E.g., by expanding their knowledge of global priorities in order to inform career or donation choices, or learning more about AI in order to inform their actions as a civil servant
  • They want to do a bit of EA-relevant research because that research could itself be valuable
    • This could either be because the person has to do a research project anyway (e.g., a thesis) or because they have some additional time on their hands

For convenience, I will sometimes lump all such people together under the label “aspiring/junior researchers”. But it’s important to note that that includes several quite different groups of people, who face different pain points and would benefit from different “solutions”.

4. Some shouldn’t pursue EA-aligned research roles

Many of those EAs would probably be better off focusing on activities/roles other than EA-aligned research (given their comparative advantage, the limited number of roles currently available, etc.). But it can be hard for those EAs to work out whether that’s the case for them, and thus they might (very understandably!) continue testing fit for such roles, trying to skill up, etc. This can lead to unnecessary time costs for these people, for hirers, for grantmakers, etc.

5. Some should pursue and get EA-aligned research roles (but don’t)

Meanwhile, many other EAs probably do have a comparative advantage for EA-aligned research, and yet self-select out of pursuing relevant jobs, funding, collaborations, or similar, or pursue such things but find it very hard to get them.[5] This may often be because orgs and funders have insufficient ability to vet, train, and manage these people (see also section 2 above).

6. Maybe more should try independent research?

It would probably be good for many EAs to simply try doing (semi-)independent EA-aligned research. (This could take the form of quite “cheap” efforts, like 10 hours trying to produce a blog post. This might help the EAs test, signal, or improve their fit for EA-aligned research, and might produce valuable outputs; see also.) But it seems that many of those EAs either never actually try this, or only get started after a delay that is much longer than necessary.

Some possible reasons for this include:

  • Some of these people are (understandably!) not willing or able to (a) try (semi-)independent research while working or (b) reduce their hours of paid work.
  • They find it hard to think of or find any research questions that seem (a) worth researching in general and (b) well-suited to their skills and interests.
  • They find it hard to determine which questions would be worth researching in general and are well-suited to their skills and interests.
  • They find it hard to pick between the questions that seem to do well on those criteria.
  • They find it hard to know exactly how to operationalise a research question, how to break it into sub-questions, where to start looking for relevant prior work, etc.
  • They find it hard to be motivated to start without some clearer signal that doing so would have a good payoff for them (e.g., an increased chance of a desired job), or that someone thinks they specifically should indeed try doing EA-aligned research.
  • They think or worry that they’ll do a bad or slow job at their current skill level without mentorship.
  • They don’t entirely feel they have “permission” to just get started, or they aren’t/don’t feel sufficiently “agenty”, or something like that.

7. Independent research attempts could be improved

When EAs do try doing EA-relevant research (semi-)independently, it seems that one or more of the following problems often occur:

  • The question they pick is substantially lower priority or less fitting for them than another question they could’ve picked.
  • They do a poor job of operationalising the question, breaking it into sub-questions, looking for relevant prior work, etc.
  • They had to spend a lot of time generating or sifting through question ideas, or figuring out how to operationalise questions, how to break questions into subquestions, where to start looking for relevant prior work, etc.
  • They find it hard to stay motivated and productive and do a good job without mentorship.
  • They don’t have a clear sense of who the target audiences for the research should be, what its path to impact / theory of change should be, etc.
    • As such, the aspiring/junior researcher doesn’t frame the research in the ideal way; doesn’t pursue the ideal constellation of sub-questions; and doesn’t take appropriate actions to ensure their research is critiqued, is built on, influences decisions, and/or is taken as a signal of the researcher’s fit for future projects.

(Additionally, sometimes the researcher does an impressive job, contributing to an org thinking the researcher is probably worth hiring, but the org still lacks sufficient funding or management capacity to hire the researcher.[6])

8. Non-EA research efforts could be improved

There are also a huge number of non-EAs who have to or want to do research projects and who could in theory be tackling higher-priority questions in a more useful/high-quality and efficient way than they currently are.

(Additionally, if these researchers were made aware of higher-priority questions and given tools that helped them tackle questions in a more useful/high-quality and efficient way, that might increase their awareness of, inclination towards, and/or engagement with EA ideas.)

9. Existing solutions are inefficient and insufficient

We have some fairly good ways of partially addressing those problems, such as a handful of research training programs, the post A central directory for open research questions, and 1-1 conversations between senior and aspiring/junior researchers (to help the aspiring/junior researchers learn about open questions, think about how high-priority and fitting various questions are, think about the paths to impact, get mentorship, etc.).

But:

  • Those partial solutions require more time from senior people in EA than seems ideal.
  • The above problems still partially persist. Reasons for that might include the following:
    • There is a limited availability of time from senior EAs
    • Collections of research questions often fail to provide clear ways to determine how high-priority each question is relative to other questions, what type of person is a good fit for addressing each question, what other questions each question connects to or could be broken down into, where to start looking for resources, what the paths to impact might be, etc.
    • In general, it’s hard to get people to do something on a volunteer basis and/or with little or no mentorship/management.
    • People may not be aware of clear examples of aspiring/junior researchers who independently tackled one or more open research questions and thereby provided direct value, gained status in the EA community, had an easier time finding a good job, or similar.

10. Senior people have capacity to help, if efficiently leveraged

I think that there are probably many “senior” EAs who would be happy to put in small amounts of time to help aspiring/junior researchers.[7]

For example, I think many senior EAs might be willing to:

  • Have one or two 30 min calls with an aspiring/junior researcher
  • Add research topic ideas to a centralised resource
  • Add comments to a centralised resource about how research on a topic might inform their own future research or decisions
  • Provide some indication of how high-priority various questions seem, what type of person might be a good fit for them, etc.

Call to action

Please comment below, send me a message, or fill in this anonymous form if:

  • You disagree with any of those observations or the inferences I draw from them
  • You want to suggest additional observations or inferences related to the EA-aligned research pipeline
  • You feel that you’ve personally “suffered” in some way from the EA-aligned research pipeline being somewhat insufficient and inefficient (e.g., you’ve found it very hard to get a relevant job or usefully try out independent research, or you’re spending lots of time giving 1-1 help to aspiring/junior researchers and feel that your input could be used more efficiently)

  1. This could, for example, push in favour of more efforts to engage with and connect to various academic literatures and fields. ↩︎

  2. Indeed, I think that people (considering) spending a lot of resources trying to improve the EA-aligned research pipeline should consider doing things like conducting interviews and surveys with relevant groups, and one weakness of this sequence is that I haven’t done that myself. ↩︎

  3. Note that I'm not saying that additional funding no longer has value or that "EA no longer has any funding constraint"; we could still clearly do more with more funding. ↩︎

  4. I’m using the term “EAs” as shorthand for “People who identify or interact a lot with the EA community”; this would include some people who don’t self-identify as “an EA”. ↩︎

  5. To some extent, this is also true for jobs in general (in the big, wide, non-EA world). But it still seems worth thinking about whether and how the situation in relation to EA-aligned research could be improved. And it does seem like the situation might be unusually extreme in relation to EA-aligned research (see also After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation). ↩︎

  6. Of course, the org might just have bad judgement, not be worth funding, etc. But I think the problem is sometimes simply that it’s hard to get sufficient management capacity and/or that it’s hard for funders to vet organisations. (For further discussion of those points, see EA is vetting-constrained and Ben Todd discussing organizational capacity, infrastructure, and management bottlenecks.) ↩︎

  7. Reasons why I believe this include various conversations with senior EAs, various conversations with aspiring/junior researchers who got brief help from senior EAs, and the willingness of many senior EAs to act as mentors for various research training programs (e.g., SERI, Legal Priorities Project) and for Effective Thesis. ↩︎

Comments

Just to address point (2), the comments in "EA is vetting-constrained" suggest that EA is not that vetting-constrained:

  • Denise Melchin of Meta Fund: "My current impression for the Meta space is that we are not vetting constrained, but more mentoring/pro-active outreach constrained.... Yes, everything I said above is sadly still true. We still do not receive many applications per distribution cycle (~12)."
  • Claire Zabel of Open Philanthropy: "Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting precisely, though that's somewhat directionally correct... Often I feel like it's an inchoate combination of something like 'a person has a vague idea they need help sharpening, they need some advice about structuring the project, they need help finding a team, the case is hard to understand and think about'"
  • Jan Kulveit of FHI: "as a grantmaker, you often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts... Unfortunately, the number of people with final authority is small, their time precious, and they are often very busy with other work."

One story, then, is that EA has successfully eliminated a previous funding bottleneck for high-quality world-saving projects. Now we have a different bottleneck - the supply of high-quality world-saving projects (and people clearly capable of carrying them out).

In a centrally planned economy like this, where the demand is artificially generated by non-market mechanisms, you'll always have either too much supply, too much demand, or a perception of complacency (where we've matched them up just right, but are disappointed that we haven't scaled them both up even more). None of those problems indicate that something is wrong. They just point to the present challenge in expanding this area of research. There will always be one challenge or another.

So how do we increase the supply of high-quality world-saving projects? Well, start by factoring projects into components:

  • A sharp, well-evaluated, timely idea with world-saving potential that also provides the team with enough social reward they're willing to take it on
  • A proven, generally competent, reliable team of experts who are available to work, committed to that idea, yet able to pivot
  • Adequate funding both for paying the team and funding their work
  • Access to outside consulting expertise
  • In many cases, significant political capital

Viewed from this perspective, it's not surprising at all that increasing the supply of such projects is vastly more difficult than increasing funding. On the other hand, this gives us many opportunities to address this challenge.

Perhaps instead of adding more projects to the list, we need to sharpen up ideas for working on them. Amateur EAs need to spend less time dreaming up novel causes/projects and more time assembling teams and making concrete plans - including for their personal finances. EAs need to spend more time building up networks of experts and government workers outside the EA movement.

I imagine that amateur EAs trying to skill up might need to make some serious sacrifices in order to gain traction. For example, they might focus on building a team to execute a project, but by necessity make the project small, temporary, and cheap. They might need to do a lot of networking and take classes, just to build up general skills and contacts, without having a particular project or idea to work on. They might need to really spend time thinking through the details of plans, without actually intending to execute them.

If I had to guess, here are some things that might benefit newer EAs who are trying to skill up:

  • Go get an MS in a hard science to gain some skill executing concrete novel projects and working in a rigorous intellectual discipline.
  • Write a book and get it published, even if it's not on anything related to EA.
  • Get an administrative volunteer position.
  • Manage a local non-EA altruistic project to improve their city.
  • Volunteer on some political campaigns.

Just to address point (2), the comments in "EA is vetting-constrained" suggest that EA is not that vetting-constrained:

I actually don't think that this is correct. 

Denise's comment does suggest that, for the meta space specifically. 

But Claire's comment seems broadly in agreement with the "vetting-constrained" view, or at least the view that that's one important constraint. Some excerpts:

Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting precisely, though that's somewhat directionally correct. It's more like there's a distribution of projects, and we've picked some of the low-hanging fruit, and on the current margin, grantmaking in this space requires more effort per grant to feel comfortable with, either to vet (e.g. because the case is confusing, we don't know the people involved), to advise (e.g. the team is inexperienced), to refocus (e.g. we think they aren't focusing on interventions that would meet our goals, and so we need to work on sharing models until one of us is moved), or to find. [...] Overall, I think generating more experienced grantmakers/mentors for new projects is a priority for the movement. [emphasis added]

And Jan Kulveit's comment is likewise more mixed.

And several other comments mostly just agree with the "vetting-constrained" view. (People can check it out themselves.)

Of course, this doesn't prove that EA is vetting-constrained - I'm just contesting the specific claim that "the comments" on that post "suggest that EA is not that vetting-constrained". (Though I also do think that vetting is one key constraint in EA, and I have some additional evidence for that that's independent of what's already in that post and the comments there, which I could perhaps try to expand on if people want.)


In a centrally planned economy like this, where the demand is artificially generated by non-market mechanisms, you'll always have either too much supply, too much demand, or a perception of complacency (where we've matched them up just right, but are disappointed that we haven't scaled them both up even more). None of those problems indicate that something is wrong. They just point to the present challenge in expanding this area of research. There will always be one challenge or another.

I think there's something valuable to this point, but I don't think it's quite right. 

In particular, I think it implies the only relevant type of "demand" is that coming from funders etc., whereas I'd want to frame this in terms of ways the world could be improved. I'm not sure we could ever reach a perfect world where it seems there's 0 room for additional impactful acts, but we could clearly be in a much better world where the room/need for additional impactful acts is smaller and less pressing. 

Relatedly, until we reach a far better world, it seems useful to have people regularly spotting what there's an undersupply of at the moment and thinking about how to address that. The point isn't to reach a perfect equilibrium between the resources and then stay there, but to notice which type of resource tends to be particularly useful at the moment and then focus a little more on providing/finding/using that type of resource. (Though some people should still do other things anyway, for reasons of comparative advantage, taking a portfolio approach, etc.) I like Ben Todd's comments on this sort of thing.

In particular, I think it implies the only relevant type of "demand" is that coming from funders etc., whereas I'd want to frame this in terms of ways the world could be improved.

My position is that "demand" is a word for "what people will pay you for." EA exists for a couple reasons:

  1. Some object-level problems are global externalities, and even governments face a free rider problem. Others are temporal externalities, and the present time is "free riding" on the future. Still others are problems of oppression, where morally-relevant beings are exploited in a way that exposes them to suffering.

    Free-rider problems by their nature do not generate enough demand for people to do high-quality work to solve them, relative to the expected utility of the work. This is the problem EA tackled in earlier times, when funding was the bottleneck.
  2. Even when there is demand for high-quality work on these issues, supply is inelastic. Offering to pay a lot more money doesn't generate much additional supply. This is the problem we're exploring here.

The underlying root cause is lack of self-interested demand for work on these problems, which we are trying to subsidize to correct for the shortcoming.

My position is that "demand" is a word for "what people will pay you for."

This seems reasonable (at least in an econ/business context), but I guess really what I was saying in my comment is that your previous comment seemed to me to focus on demand and supply and note that they'll pretty much always not be in perfect equilibrium, and say "None of those problems indicate that something is wrong", without noting that the thing that's wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.

I think I sort-of agree with your other two points, but I think they seem to constrain the focus to "demand" in the sense of "how much will people pay for people to work on this", and "supply" in the sense of "people who are willing and able to work on this if given money", whereas we could also think about things like what non-monetary factors drive various types of people to be willing to take the money to work on these things. 

(I'm not sure if I've expressed myself well here. I basically just have a sense that the framing you've used isn't clearly highlighting all the key things in a productive way. But I'm not sure there are actually any interesting, major disagreements here.)

Your previous comment seemed to me to focus on demand and supply and note that they'll pretty much always not be in perfect equilibrium, and say "None of those problems indicate that something is wrong", without noting that the thing that's wrong is animals suffering, people dying of malaria, the long-term future being at risk, etc.

In the context of the EA forum, I don't think it's necessary to specify that these are problems. To state it another way, there are three conditions that could exist (let's say in a given year):

  1. Grantmakers run out of money and aren't able to fund all high-quality EA projects.
  2. Grantmakers have extra money, and don't have enough high-quality EA projects to spend it on.
  3. Grantmakers have exactly enough money to fund all high-quality EA projects.

None of these situations indicate that something is wrong with the definition of "high quality EA project" that grantmakers are using. In situation (1), they are blessed with an abundance of opportunities, and the bottleneck to do even more good is funding. In situation (2), they are blessed with an abundance of cash, and the bottleneck to do even more good is the supply of high-quality projects. In situation (3), they have two bottlenecks, and would need both additional cash and additional projects in order to do more good.

No matter how many problems exist in the world (suffering, death, X-risk), some bottleneck or another will always exist. So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake. It merely indicates that this is the present bottleneck they're facing.

For the rest, I'd say that there's a difference between "willingness to work" and "likelihood of success." We're interested in the reasons for EA project supply inelasticity. Why aren't grantmakers finding high-expected-value projects when they have money to spend?

One possibility is that projects and teams to work on them aren't motivated to do so by the monetary and non-monetary rewards on the table. Perhaps if this were addressed, we'd see an increase in supply.

An alternative possibility is that high-quality ideas/teams are rare right now, and can't be had at any price grantmakers are willing or able to pay.

I think it's not especially useful to focus on the division into just those three conditions. In particular, we could also have a situation where vetting is one of the biggest constraints, and even if we're not in that situation vetting is still a constraint - it's not just about the number of high-EV projects (with a competent and willing team etc.) and the number of dollars, but also whether the grantmakers can find the high-EV projects and discriminate between them and lower-EV ones.

Relatedly, there could be a problem of grantmakers giving to things that are "actually relatively low EV" (in a way that could've been identified by a grantmaker with more relevant knowledge and more time, or using a better selection process, or something like that). 

So the simple fact that grantmakers happen to be in situation (2) does not indicate that they are doing something wrong, or making a mistake.

I think maybe there's been some confusion where you're thinking I'm saying grantmakers have "too high a bar"? I'm not saying that. (I'm agnostic on the question, and would expect it differs between grantmakers.)  

Yeah, I am worried we may be talking past each other somewhat. My takeaway from the grantmaker quotes from FHI/OpenPhil was that they don't feel they have room to grow in terms of determining the expected value of the projects they're looking at. Very prepared to change my mind on this; I'm literally just going from the quotes in the context of the post to which they were responding.

Given that assumption (that grantmakers are already doing the best they can at determining EV of projects), then I think my three categories do carve nature at the joints. But if we abandon that assumption and assume that grantmakers could improve their evaluation process, and might discover that they've been neglecting to fund some high-EV projects, then that would be a useful thing for them to discover.

Oh, I definitely don't think that grantmakers are already doing the best that could be done at determining the EV of projects. And I'd be surprised if any EA grantmaker thought that that was the case, and I don't think the above quotes say that. The three quotes you gave are essentially talking about what the biggest bottleneck is, and saying that maybe the biggest bottleneck isn't quite "vetting", which is not the same as the claim that there'd be zero value in increasing or improving vetting capacity. 

Also note that one of the three quotes still focuses on a reason why vetting may be inadequate: "as a grantmaker, you often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts... Unfortunately, the number of people with final authority is small, their time precious, and they are often very busy with other work."

I also think that "doing the best they can at determining EV of projects" implies that the question is just whether the grantmakers' EV assessments are correct. But what's often happening is more like they either don't hear about something or (in a sense) they "don't really make an EV assessment" - because a very very quick sort of heuristic/intuitive check suggested the EV was low or simply that the EV of the project would be hard to assess (such that the EV of the grantmaker looking into it would be low). 

I think there's ample evidence that these things happen, and it's obvious that they would happen, given the huge array of projects that could be evaluated, how hard they are to evaluate, and how there are relatively few people doing those evaluations and (as Jan notes in the above quote) there is relatively little domain expertise available to them.

(None of this is intended as an insult to grantmakers. I'm not saying they're "doing a bad job", but rather simply the very weak and common-sense claim that they aren't already picking only and all the highest EV projects, partly because there aren't enough of the grantmakers to do all the evaluations, partly because some projects don't come to their attention, partly because some projects haven't yet gained sufficient credible signals of their actual EV, etc. Also none of this is saying they should simply "lower their bar".)

For one of very many data points suggesting that there is room to improve how much money can be spent and what it is spent on, and suggesting that grantmakers agree, here's a quote from Luke Muehlhauser from Open Phil regarding their AI governance grantmaking:

Unfortunately, it’s difficult to know which “intermediate goals” we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI. Would tighter regulation of AI technologies in the U.S. and Europe meaningfully reduce catastrophic risks, or would it increase them by (e.g.) privileging AI development in states that typically have lower safety standards and a less cooperative approach to technological development? Would broadly accelerating AI development increase the odds of good outcomes from transformative AI, e.g. because faster economic growth leads to more positive-sum political dynamics, or would it increase catastrophic risk, e.g. because it would leave less time to develop, test, and deploy the technical and governance solutions needed to successfully manage transformative AI? For those examples and many others, we are not just uncertain about whether pursuing a particular intermediate goal would turn out to be tractable — we are also uncertain about whether achieving the intermediate goal would be good or bad for society, in the long run. Such “sign uncertainty” can dramatically reduce the expected value of pursuing some particular goal, often enough for us to not prioritize that goal.

As such, our AI governance grantmaking tends to focus on…

  • …research that may be especially helpful for learning how AI technologies may develop over time, which AI capabilities could have industrial-revolution-scale impact, and which intermediate goals would, if achieved, have a positive impact on transformative AI outcomes, e.g. via our grants to GovAI.
  • [and various other things]

So this is a case where a sort of "vetting bottleneck" could be resolved either by more grantmakers, grantmakers with more relevant expertise, or research with grantmaking-relevance. And I think that that's clearly the case in probably all EA domains (though note that I'm not claiming this is the biggest bottleneck in all domains).

Multiple comments from multiple fund managers on the EA Infrastructure Fund's recent Ask Us Anything strongly suggest they also believe there are strong vetting constraints (even if other constraints also matter a lot). 

So I'm confident that the start of your comment is incorrect in an important way about an important topic. I think I was already confident of this due to the very wide array of other indications that there are strong vetting constraints, the fact that the quotes you mention don't really indicate that "EA is not that vetting-constrained" (with the exception of Denise's comment and the meta space specifically), and the fact that other comments on the same post you're quoting comments from suggest EA is quite vetting constrained. (See my other replies for details.) But this new batch of evidence reminded me of this and made the incorrectness more salient. 

I've therefore given your comment a weak downvote. I think it'd be better if it had lower karma because I think the comment would mislead readers about an important thing (and the high karma will lend it more credence). But you were writing in good faith, you were being polite, and other things you said in the comment were more reasonable, so I refrained from a strong downvote.

(But I feel a little awkward/rude about this, hence the weird multi-paragraph explanation.)

Looking forward to hearing about those vetting constraints! Thanks for keeping the conversation going :)

To be clear, I agree that "vetting" isn't the only key bottleneck or the only thing worth increasing or improving, and that things like having more good project ideas, better teams to implement them, more training and credentials, etc. can all be very useful too. And I think it's useful to point this out.

In fact, my second section was itself only partly about vetting:

There are many orgs and funders who would be willing and able to hire or fund people to do such research if there were people who the orgs/funders could trust would do it well (and without requiring too much training or vetting). But not if the people are inexperienced, are choosing lower-priority questions, or are hard for orgs/funders to assess the skills of (see also EA is vetting-constrained and Ben Todd discussing organizational capacity, infrastructure, and management bottlenecks). [emphasis shifted]

(I also notice that some of your points sound more applicable to non-research careers. Such careers aren't the focus of this sequence, though they're of course important too, and I think some of my analysis is relevant to them too and it can be worth discussing them in the comments.)

I feel a little squeamish about this title. Perhaps it's too sensationalist. Or perhaps it gives too much of a vibe of me thinking the pipeline is terrible and the people who've built it are idiots (whereas really I just think that it's notably less good than it could be, and that that's important and worth talking about). If so, please pretend I just used the rather blander title "Observations about the EA-aligned research pipeline" instead!

Something I don't really emphasise in this sequence is the question of how much it matters to get more quite good and fitting EA-aligned researchers vs getting more exceptionally good EA-aligned researchers. I haven't looked into that question in great detail, and am currently somewhat agnostic. You can read some related thoughts from Ben Todd here: How much do people differ in productivity? What the evidence says.

I think most of what I say in the sequence roughly applies regardless of what the correct answer to that question is. But this is partly because most of what I say is laying out broad problems, goals, intervention options, etc. - for people who are evaluating, designing, and/or implementing specific interventions, thinking more about that question seems important. 

I just listened to this again on the EA Forum Podcast (https://open.spotify.com/episode/6i2pVhJIF0wrF2OPFjaL2n?si=e3jTibMfRY6G9r99zveFzA), having only skimmed the written version. Somehow I got more out of the audio.

Anyways, I want to add a few impressions.

I think there is some under-emphasis on the extent to which “regular” researchers are doing, or could be induced to do, things that are in fact very closely aligned with EA research priorities. Why do people get into research and academia? (Other than amenities, prestige, and to pay the bills?)

Some key reasons are (1) love of knowledge and of intellectual puzzles, and (2) the desire to make the world a better place through the research. That, at least, was my impression of why people go into areas like economics, most social sciences, philosophy, and biology. I think that some of these researchers may have more parochial concerns, or non-utilitarian values (e.g. social justice), and may not be fully convinced by the desire to maximise the long-term good for people and sentient beings everywhere. However, I think that academics and researchers tend to be much further down this road than the general public. Perhaps an interesting project (for my team and fellow travellers) could involve “convincing and communicating with academics/researchers”.

I think we can do more to leverage and “convert” research and researchers who are not affiliated with EA (yet).

https://www.lesswrong.com/posts/HDXLTFnSndhpLj2XZ/i-m-leaving-ai-alignment-you-better-stay is relevant to how independent research attempts could be improved. I describe my attempt at independent AI alignment research and how I could have done better. It applies to other fields, too.

A commenter on a draft of this post highlighted that I'd said:

For convenience, I will sometimes lump all such people together under the label “aspiring/junior researchers”. But it’s important to note that that includes several quite different groups of people, who face different pain points and would benefit from different “solutions”.

And the commenter said:

this leaves me a bit confused about/interested in on what dimensions you see there being different groups; e.g. different research domains (e.g. animal suffering vs longtermism), or something else?

This is a good question. My responses:

  • The dimensions of difference I was mainly thinking of when I wrote those sentences were:
    • whether the person has never done research ("aspiring") vs having done some but not being very experienced ("junior")
    • the extent to which the points listed in section 3 apply to them (i.e., to what extent do they want to do EA-relevant research long-term, want to test their fit for doing EA-relevant research, want to do a bit of EA-relevant research in order to gain knowledge and skills that will improve their ability to do other high-impact things, or want to do a bit of EA-relevant research because that research could itself be valuable)
  • But in reality, there are many other dimensions that would be relevant when analysing issues with the EA-aligned research pipeline and prioritising and implementing solutions to it
    • Including but not limited to what cause area the person wants to focus on, what fields or methodologies they have experience with or want to use, whether they want to do research in think tanks or academia or explicitly EA orgs or elsewhere, etc.
  • (See also footnote 2)

The commenter also said:

Relatedly, I'm tracking that the term "research" itself is broad and fuzzy, and I've seen it leading to non-constructive confusion. E.g. the skill and role of "working out what strategies and interventions to pursue amongst the innumerable possibilities"* is often referred to as research within EA, but can in practice look very different from what we commonly understand academic research to look like. (And in fact, I also think optimizing for one or the other can look fairly different.)

Is this a "subgroup" of people you have in mind here?
I think it might be worth clarifying this, though I acknowledge you do gesture at this a bit in (4) and maybe that's enough

*terminology stolen from your comment exchange with Owen: https://forum.effectivealtruism.org/posts/bG9ZNvSmveNwryx8b/ama-owen-cotton-barratt-rsp-director?commentId=uZQfs5mRCdEtXbAdD#comments

My reactions:

  • I do think that research borders on many other things, that "researchers" tend to also do other things, and that some other types of people also do some research-y things
  • But I'm focused in this sequence on activities and roles that are quite clearly primarily "research" or "researchers", rather than using broader senses of those terms
  • That said, much of my analysis of the problem and the possible interventions would probably also apply somewhat to other types of activities and roles
    • And more so the closer they are to being EA-aligned research activities and roles
    • But I wasn't writing with that in mind, and would've written at least somewhat different things if I was

Hello!
I've read this article aloud as part of the EA Forum Podcast, in case you wanted an audio version.
