See the post introducing this sequence for context, caveats, credits, and links to prior discussion relevant to this sequence as a whole. This post doesn’t necessarily represent the views of my employers.
In a previous post, I highlighted some observations that I think collectively demonstrate that the current processes by which new EA-aligned research and researchers are “produced” are at least somewhat insufficient, inefficient, and prone to error. In this post, I’ll briefly discuss 19 interventions that might improve that situation. I discuss them in very roughly descending order of how important, tractable, and neglected I think each intervention is, solely from the perspective of improving the EA-aligned research pipeline. The interventions are:
- Creating, scaling, and/or improving EA-aligned research orgs
- Creating, scaling, and/or improving EA-aligned research training programs (e.g. certain types of internships or summer research fellowships)
- Increasing grantmaking capacity and/or improving grantmaking processes
- Scaling Effective Thesis, improving it, and/or creating new things sort-of like it
- Increasing and/or improving EAs’ use of non-EA options for research training, credentials, etc.
- Increasing and/or improving research by non-EAs on high-priority topics
- Creating a central, editable database to help people choose and do research projects
- Using Elicit (an automated research assistant tool) or a similar tool
- Forecasting the impact projects will have
- Adding to and/or improving options for mentorship, feedback sources, etc. (including from peers)
- Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
- Increasing and/or improving career advice and/or support with networking
- Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers
- Creating and/or improving relevant educational materials
- Creating, improving, and/or scaling market-like mechanisms for altruism (e.g., impact certificates)
- Increasing and/or improving the use of relevant online forums
- Increasing the number of EA-aligned aspiring/junior researchers
- Increasing the amount of funding available for EA-aligned research(ers)
- Discovering, writing, and/or promoting positive case studies
Feel free to skip to sections that interest you; each section should make sense by itself.
As with the rest of this sequence:
- This post is primarily intended to inform people who are helping implement or fund interventions to improve the EA-aligned research pipeline, or who could potentially do so in future
- But it may also help people who hope to themselves “enter” and “progress through” the EA-aligned research pipeline
(For illustration, I’ve added a comment below this post regarding how my own career, project, and donation decisions have been influenced by thinking about why and how the EA-aligned research pipeline should be improved.)
Caveats and clarifications
- Versions of many of these interventions already exist or have already been proposed
- There are various other ways to carve up the space of options, various complementary framings that can be useful, etc.
- Many of these interventions would also or primarily have benefits unrelated to improving the EA-aligned research pipeline
- These interventions differ in their importance (in general or for improving the EA-aligned research pipeline specifically), neglectedness, and tractability
- And I haven’t gathered systematic data on those things
- Specific versions of a given intervention, or specific combinations of those interventions, would also differ on those variables
- Some of these interventions - or some versions of them - might not actually be worthwhile or even net-positive
- These interventions differ in which aspects of the EA-aligned research pipeline they’d (primarily) improve
- To keep this post (relatively!) brief, I don’t fully explain or justify all the points I make, nor mention all the points that come to mind regarding each intervention
- I’m happy to provide further thoughts in replies to comments
- I’m sure I’ve failed to mention some promising intervention options, and I’d welcome comments that mention additional ideas (whether they’re the commenter’s own idea or something that has been proposed elsewhere)
The intervention options
Creating, scaling, and/or improving EA-aligned research orgs
- It seems like EA-aligned research orgs should house a substantial fraction of EA-aligned researchers and handle a substantial fraction of vetting, training, etc. for aspiring/junior EA-aligned researchers
- Though not necessarily the majority; there is also a place for grantmakers, non-EA orgs, independent research, etc.
- There would be more capacity for that if EA-aligned research orgs were more numerous, larger, and/or better
- Some things that would help us move towards that situation include:
- Org leadership teams consciously thinking about how they can scale gracefully yet relatively quickly, designing their systems and strategies around that, building strong operations teams, hiring with that in mind (e.g., looking for people who could in future manage other hires), and providing staff with opportunities to build their management skills
- Individuals trying to build skills in or pursue roles related to management, mentorship, or perhaps operations, and perhaps considering founding new orgs
- Funders trying to fund orgs which seem likely to scale gracefully yet relatively quickly (if given more funding), funding the creation of new orgs (especially those that could scale well), and engaging in “active funding” to create more such funding opportunities (see also field building)
Creating, scaling, and/or improving EA-aligned research training programs
- See here for posts relevant to this topic, and here for a list of such programs
- These programs include things like research internships, summer research fellowships, and some volunteering programs
- Examples of efforts to create, scale, and/or improve such programs include:
- The creation of SERI
- The SERI team’s efforts to encourage and support the creation of programs similar to themselves
- My creation of a Slack workspace for people who are (or are planning to be) involved in organising such programs to exchange ideas, ask questions, share resources, etc.
- For one attempt to assess the impact of such a program, see Review of FHI's Summer Research Fellowship 2020
Increasing grantmaking capacity and/or improving grantmaking processes
- By “grantmaking capacity”, I mean the collective capacity grantmakers and others have to evaluate and/or create funding opportunities
- I don’t mean available funding; I have a separate section below on increasing available funding
- Relevant individuals include people who work as grantmakers, other people who give donation recommendations, and people who make decisions about where to donate their own money
- Ways grantmaking capacity could be increased include hiring or training new grantmakers, increasing the time spent on grantmaking by people who currently do it part-time, distributing funding to other individuals for regranting, and creating or scaling charity evaluators
- Increasing grantmaking capacity and/or improving grantmaking processes could improve the EA-aligned research pipeline by increasing the amount and efficiency of financial support for aspiring/junior researchers and/or for work on any of the other interventions discussed in this post
- See also Benjamin Todd on what the effective altruism community most needs
Scaling Effective Thesis, improving it, and/or creating new things sort-of like it
- I have a quite positive impression of Effective Thesis
- I tentatively think it’d be good for Effective Thesis to expand in some way, and/or for additional things sort-of like Effective Thesis to be created
- But I haven’t really thought about this much yet, and so:
- For all I know, it might be the case that Effective Thesis are already doing most of the most valuable and tractable things in this space
- I’m not really sure what, specifically, scaling, improving, or creating new things sort-of like Effective Thesis should look like
- If this involves new orgs/projects, they could try somewhat different strategies and approaches to those used by Effective Thesis, or specialise more for particular user groups or topic areas
- And they could share resources and learnings with Effective Thesis, and vice versa
- (My understanding is that this would be analogous to the current situation in the EA-aligned career advice space, where the relevant organisations include 80,000 Hours, Animal Advocacy Careers, and Probably Good)
Increasing and/or improving EAs’ use of non-EA options for research-relevant training, credentials, testing fit, etc.
- The next post in this sequence will focus on this idea, so I won’t discuss it here
Increasing and/or improving research by non-EAs on high-priority topics
- See also field building
- On a somewhat abstract level, this could be done through things like:
- Increasing awareness of and inclination towards these topics among non-EAs
- Funding work on these topics
- Funding the creation of non-EA orgs, institutes, etc. focused on these topics (e.g., CSET)
- Making it (seem) easier to publish respectable papers on these topics
- Running conferences or workshops on these topics
- Increasing interactions between EA and non-EA researchers
- Providing guidance to non-EA researchers on these topics
- Shifting academic norms and incentives towards choosing research for its impact potential
- More concretely, this could be done through things like:
- Organising workshops on the topic
- Publishing papers on a high-priority topic (which could raise the topic’s salience, make publishing on it seem more acceptable, give people things to cite)
- Inviting non-EAs to visit EA research institutes/orgs
- Providing the kind of resources and coaching Effective Thesis provides
- Scoping EA-aligned research directions in a way that makes them easier for people working in traditional academia to learn about, see the relevance of, connect to established disciplines, and work on
- The GovAI and GPI research agendas could be seen as two examples of this sort of effort
- Creating prizes or awards for the best research on a topic, and trying to make the prize/award sufficiently large, prestigious, and well-advertised in relevant places that top or promising non-EA researchers are drawn towards it
- In addition to improving the pipeline for EA-aligned research produced by non-EAs, this might also improve the pipeline for EA-aligned researchers, such as by:
- Causing longer-term shifts in the views of some of the non-EAs reached
- Making it easier for EAs to use non-EA options for research training, credentials, etc. (see my next post)
- And these benefits could perhaps be huge, as the vast majority of all research talent, funding, hours, etc. is outside of EA
- On the other hand, it may be less tractable for “us” to increase and/or improve that pool of talent, funding, hours, etc., compared to doing so for the EA pool
Creating a central, editable database to help people choose and do research projects
- The sixth post in this sequence will focus on this idea
Using Elicit (an automated research assistant tool) or a similar tool
- The sixth post in this sequence will discuss this idea
Forecasting the impact projects will have
- The sixth post in this sequence will discuss this idea
Adding to and/or improving options for collaborations, mentorship, feedback, etc. (including from peers)
This could include things like:
- Encouraging and facilitating aspiring/junior researchers in connecting with each other to get feedback on plans, get feedback on drafts, collaborate, start coworking teams, and run focused practice sessions
- E.g., creating spaces like Effective Altruism Editing and Review
- E.g., circulating advice and links like those contained in Notes on EA-related research, writing, testing fit, learning, and the Forum
- E.g., perhaps, creating platforms like Impact CoLabs
- (Those are just the first three examples that came to mind; there are probably other, quite different ways to achieve this goal)
- Encouraging and facilitating aspiring/junior researchers and more experienced researchers to connect in similar ways
- This could involve the aspiring/junior researcher acting as a research assistant
- This could involve the more experienced researchers delegating some research tasks/projects that they wanted done anyway
- This could help align the incentives of the more and less experienced researchers, including incentivising high-quality feedback
- This could be paid or unpaid (i.e., volunteering)
- One example of a project that arguably serves this purpose is READI
- Creating, promoting, and/or engaging with resources on how to more efficiently and effectively seek or provide mentorship, feedback, etc.
- E.g., writing posts like Giving and receiving feedback and Asking for advice
- E.g., participating in a (non-EA) course on mentorship, coaching, or management, in order to then be better at providing those services to aspiring/junior researchers
Improving the vetting of (potential) researchers, and/or better “sharing” that vetting
- Improving selection processes at EA-aligned research organisations
- Increasing the number and usefulness of referrals of candidates from one selection process (e.g., for a job or a grant) to another selection process.
- This already happens, but could perhaps be improved by:
- Increasing how often it happens
- Increasing how well-targeted the referrals are
- Increasing the amount of information provided to the second selection process?
- Increasing how much of the second selection process the candidate can “skip”?
- Creating something like a "Triplebyte for EA researchers", which could scalably evaluate aspiring/junior researchers, identify talented/promising ones, and then recommend them to hirers/grantmakers
- This could resolve most of the vetting constraints if it could operate efficiently and was trusted by the relevant hirers/grantmakers
Increasing and/or improving career advice and/or support with network-building
Examples of existing efforts along these lines include:
- 80,000 Hours
- Animal Advocacy Careers
- Probably Good
- Many local EA groups
- Parts of what the Improving Institutional Decision-Making working group and the Simon Institute for Longterm Governance do
- In particular, my understanding is that these groups help provide some career advice and connections in their particular areas of expertise
Reducing the financial costs of testing fit and building knowledge & skills for EA-aligned research careers
- For example, CEEALAR (formerly the EA Hotel) provides free or cheap accommodation and board to people engaging in these sorts of activities
- One could set up similar things in other locations, or find other ways to reduce the financial costs of taking time to engage in these activities
- Note that here I don’t mean providing funding to support people in doing these activities
- That also seems valuable, but is covered in other sections of this post
Creating and/or improving relevant educational materials
- Such materials could include courses, workshops, textbooks, standalone writings that are shorter than textbooks (e.g., posts), or sequences of such shorter writings
- Existing examples include Charity Entrepreneurship’s writings about their research process, parts of Charity Entrepreneurship’s handbook, posts tagged Research methods, and posts tagged Scholarship & Learning
- Topics these materials could focus on include on doing research in general, aspects of doing EA-aligned research that differ from research in other contexts, EA-aligned research using particular disciplines or methodologies, or research on particular EA-relevant topics
- These materials could be created by EAs, adapted by EAs from existing things, or commissioned by EAs but created by other people
- (Of course, non-EAs left to their own devices also make many relevant materials; on how that could be used, see “Increasing and/or improving EAs’ use of non-EA options for research training, credentials, etc.”)
Creating, improving, and/or scaling market-like mechanisms for altruism
- See Markets for altruism and Certificates of impact
- This could potentially have benefits such as improving prioritisation and providing a more efficient and scalable system of vetting research projects for funding
Increasing and/or improving the use of relevant online forums
- I think many aspiring/junior researchers would benefit from using the EA Forum and/or LessWrong to:
- learn about important ideas
- discover or think of research questions
- find motivation and a sense of accountability for doing research and writing (since they’re doing it for an actual audience)
- disseminate their findings/ideas
- get feedback
- find collaborators
- form connections
- See also Reasons for and against posting on the EA Forum
- I also think it would be possible and valuable to increase how often these sites are used - and how useful they are - for those purposes
- It seems to me that impressive increases have already occurred since late 2018 (when I first started looking at the Forum)
- Increasing the usefulness of these sites could include things like adding new features or integrating these sites with other interventions for improving the EA-aligned research pipeline (e.g., the database idea discussed in my next post)
- (But here I should again note that, as with all interventions mentioned in this post, this wouldn’t address all the current imperfections in the EA-aligned research pipeline, nor render all the other interventions unnecessary)
- My post Notes on EA-related research, writing, testing fit, learning, and the Forum is an example of an effort to increase and improve the use of relevant online forums, and also links to other examples of such an effort
Increasing the number of EA-aligned aspiring/junior researchers
- The number of people “entering” or “in” the pipeline doesn’t seem to be as important a bottleneck as some other things (e.g., people with the specific skills necessary for specific projects, capacity to train more such people, capacity to put those people to good use, and capacity to vet people/projects; see Todd, 2020)
- But more people in the pipeline would still likely lead to:
- more people eventually becoming useful EA-aligned researchers
- more fitting people being selected for the EA-aligned research roles/funding that would’ve been available anyway (since there’s a larger pool of people to select from; see also How replaceable are the top candidates in large hiring rounds?)
- On the other hand, this comes at the opportunity cost of whatever else these people would’ve spent their time on otherwise
- Additionally, more people in the pipeline might have negative consequences other than opportunity cost, such as:
- People being turned off EA more generally because of frustration over repeatedly being declined jobs or funding (whereas those people may have found more success in other paths)
- Making various forms of coordination, cooperation, and trust harder or less valuable
- Leading to more low-quality or incautious work or messaging, reducing the credibility of EA-aligned research communities
- (See also value of movement growth)
Increasing the amount of funding available for EA-aligned research(ers)
- As with the number of EA-aligned aspiring/junior researchers, funding for EA-aligned research(ers) doesn’t seem to be as important a bottleneck as some other things, but more funding would still help
- Here I’m talking about increasing the funding available for activities whose primary goal is relatively directly producing valuable research
- In contrast, increasing the funding available for activities whose primary goal is improving the EA-aligned research pipeline - e.g., by supporting one of the interventions in this post - may better target the key bottlenecks and thus be more valuable
- (Of course, many activities may have both types of goals, and sometimes with roughly equal weight)
- I’m also only talking about the amount of funding available, not about how much high-priority research actually gets funded, since the latter also depends on other things such as grantmaking capacity and what projects/people are available to be funded
Discovering, writing, and/or promoting positive case studies
- Discussed in a comment below this post
If you have thoughts on these interventions or other interventions to achieve a similar goal, or would be interested in supporting such interventions with your time or money, please comment below, send me a message, or fill in this anonymous form. This could perhaps inform my future efforts, allow me to connect you with other people you could collaborate with or fund, etc.
Though it’s hard to even say what that means, let alone how much anyone should trust my quick rankings; see also the “Caveats and clarifications” section. ↩︎
Note that even good things can be made better! ↩︎
I’m using the term “EAs” as shorthand for “People who identify or interact a lot with the EA community”; this would include some people who don’t self-identify as “an EA”. ↩︎
For example, one could view each of these intervention options through the lens of creating and/or improving “hierarchical network structures” (see What to do with people?). ↩︎
But I think it would be possible and valuable to do so. E.g., one could find many examples of people who were hired as a researcher at an EA-aligned org, went through an EA-aligned research training program, or did a PhD under a non-EA supervisor; look at what they’ve done since then; and try to compare that to some reasonable guesses about the counterfactual and/or people who seemed similar but didn’t have those experiences. (I know of at least one attempt to do roughly this.) It would of course be hard to be confident about causation and generalisability, but I think we’d still learn more than we know now. ↩︎
For example, creating, scaling, and/or improving EA-aligned research organisations and doing the same for EA-aligned research training programs might be complementary goods; more of the former means more permanent openings for the “graduates” of those programs, and more of the latter means more skilled, motivated, and vetted candidates for those orgs. ↩︎
For convenience, I’ll sometimes lump various different types of people together under the label “aspiring/junior researchers”. I say more about this group of people in a previous post of this sequence. ↩︎
See “active funding”. See also field building. ↩︎
This is based on reading some of what they’ve written about their activities, strategy, and impact assessment; talking to people involved in the project; and my more general thinking about what the EA-aligned research pipeline needs. But I haven’t been an Effective Thesis coach or mentee myself, nor have I tried to carefully evaluate their impact. ↩︎
The original Director of CSET and several of its staff have been involved in the EA community, but many other members of staff are not involved in EA. ↩︎
See, for example, Learnings about literature review strategy from research practice sessions. ↩︎
This idea was suggested as a possibility by Peter Hurford. See some thoughts on the idea here. ↩︎
I’m grateful to Edo Arad for suggesting I include roughly this intervention idea. ↩︎
You can update the EA CoLabs link (under Adding to and/or improving options...) with their website (Impact Colabs) which is a more functional update to this I think.
Thanks - done :)
Some quick notes on how my own career, project, and donation decisions have been influenced by thinking about the value of and methods for improving the EA-aligned research pipeline
(Note that most of these decisions were made before I drafted this sequence of posts, and thus weren’t based on my latest thinking. Also, I am likely missing some relevant things and will fail to explain some things well. Finally, as usual, this comment expresses my personal views only.)
Additional intervention ideas
Here I’ll keep track of additional intervention ideas that have occurred to me since I finished drafting this post. Perhaps in future I’ll integrate some into the post itself.
Rough notes on another idea, following a call I just had:
An idea from Buck (see also the comments on the linked shortform itself):
An idea from Linch:
(See also the comments on the shortform.)
Complementary perspectives/framings that didn’t quite fit into this post
David Janku of Effective Thesis has written about interventions - other than Effective Thesis itself - which also aim to influence which research is generated. I recommend reading that section, but here’s the list of interventions with the explanations and commentary removed:
David adds that an additional approach which doesn't aim to influence which research is generated is “coordination - e.g. connecting students/researchers interested in the same topics”.
Meanwhile, Jonas Vollmer of EA Funds has written that, to achieve one possible vision for the EA Long-Term Future Fund:
I think that similar points could also be made for longtermist grantmaking by other actors (e.g., Open Philanthropy) and for grantmaking in some other areas (e.g., I’m guessing, wild animal welfare). And I think many of the interventions mentioned in this post might help address those needs.
Here are my thoughts on discovering, writing, and/or promoting positive case studies (moved to a comment since I tentatively think this intervention would be less valuable than the others):
Readers of this post may also be interested in my rough collection of Readings and notes on how to do high-impact research.
Just came here to comment something that's been on my mind that I didn't recall being suggested in the post, though it partly overlaps with your suggestions 1, 2, 4, 11, and 19.
Suggestion: Paid literature reviews with some (relatively low level) supervision.
Context: Since working at Sentience Institute, I've done quite a few literature reviews. (I've also done some more "rough and ready" ones at Animal Advocacy Careers.) I think that these have given me a much better understanding of how social sciences academia works, what sort of information is most helpful etc. A lot of the knowledge comes in handy in places that I wouldn't necessarily have predicted, too. This makes me feel like the benefits might be comparable to the sorts of benefits that I expect lots of people get from PhDs -- some methodological training / familiarity, and some useful knowledge. It wouldn't give you some benefits of PhDs like signalling value, familiarity with the peer review process, or close mentorship relationships, but if you tried to get the literature reviews published in peer-reviewed journals, then that would add some of those benefits back in (and maybe help to improve the end product too).
Lit reviews can be quite time-consuming, but don't necessarily require any very special skills -- just willingness to spend time on it and look things up (e.g. methodological aspects) when you don't know or understand them, rather than plowing on regardless. Obviously some methodological background in the topic would be helpful, but doesn't always seem necessary; I'm a history grad and have done literature reviews on subjects from psychology to ethics to management.
It might be quite easy to explicitly offer (1) funding and (2) facilitation for independent researchers to be connected to potential reviewers of the end product. It could be up to the individual to suggest topics, or to some centralised body (as in your suggestion 7).
I'm not sure whose responsibility this should be. It could be EA Funds, Effective Thesis, or individual research orgs.
Thanks! Yeah, this seems like a handy idea.
I was recently reminded of the "Take action" / "Get involved" page on effectivealtruism.org, and I now see that it actually includes a page on Write a literature review or meta-analysis. That Take action page seems useful, and should maybe be highlighted more often. In retrospect, I probably should've linked to various bits of it from this post.
True! I'd forgotten about that page. I think some sort of fairly minimal infrastructure might notably increase the number of people actually doing it though.
(Yeah, I didn't mean that this meant your comment wasn't useful or that it wouldn't be a good idea to set up some sort of intervention to support this idea. I do hope someone sets up such an intervention, and I may try to help that happen sometime in future if I get more time or think of a particularly easy and high-leverage way to do so.)
Notably missing from this list, but related to 5, 11, and 17 (and arguably 1 and 18), is increasing the number and EA alignment of currently non-EA or weakly EA-aligned senior researchers.
That is, increasing the number of senior EA-aligned researchers not via the pipeline of
get interested in EA -> be a junior EA researcher -> be an intermediate EA researcher -> be a senior EA researcher,
but instead via
be a senior researcher -> get interested in EA -> be a senior EA researcher.
I don't have very obvious examples in mind, but potential case studies so far include Phillip Tetlock, David Roodman, Rachel Glennester, Michael Kremer, Kevin Esvelt, and Stuart Russell.
Yeah, I think this is a quite important point that's sort-of captured by the other paths you mention, but (in hindsight) not sufficiently highlighted/emphasised.
I think another possible example is Allan Dafoe - I don't know his full "origin story", and it's possible he was already very EA-aligned as a junior researcher, but I think his actual topic selection and who he worked with switched quite a lot (and in an EA-aligned direction) after he was already fairly senior. And that seniority allowed him to play a key role in GovAI, which was (in my view) extremely valuable.
One place where I kind-of nod to the path you mention is:
I don't think Allan's really an example of this.
I think that quote makes it sound like Allan already had a similar worldview and cause prioritisation to EA, but wasn't aware of or engaged with the EA community (though he doesn't explicitly say that), and so he still seems like sort-of an example.
It also sounds like he wasn't actively and individually reached out to by a person from the EA community, but rather just found relevant resources himself and then reached out (to Bostrom). But that still seems like it fits the sort of thing Linch is talking about - in this case, maybe the "intervention (for improving the EA-aligned research pipeline)" was something like Bostrom's public writing and talks, which gave Allan a window into this community, which he then joined. And that seems like a good example of a field building intervention?
(But that's just going from that quote and my vague knowledge of Allan.)
Fair enough. I guess just depends on exactly how broad/narrow of a category Linch was gesturing at.
I think the crux to me is to what extent Allan's involvement in EAish AI governance is overdetermined. If, in a world with 75% less public writings on transformative AI of Bostrom's calibre, Allan would still be involved in EAish AI governance, then this would point against the usefulness of this step in the pipeline (at least with the Allan anecdote).
I roughly agree, though would also note that the step could be useful by merely speeding up an overdetermined career move, e.g. if Allan would've ended up doing similar stuff anyway but only 5 years later.
Yes, I agree that speeding up career moves is useful.
In the EA Infrastructure Fund's Ask Us Anything, I asked for their thoughts on the sorts of topics covered in this sequence, e.g. their thoughts on the intervention options mentioned in this post. I'll quote Buck's interesting reply in full. See here for precisely what I asked and for replies to Buck's reply (including me agreeing or pushing back on some things).
"Re your 19 interventions, here are my quick takes on all of them
Yes I am in favor of this, and my day job is helping to run a new org that aspires to be a scalable EA-aligned research org.
I am in favor of this. I think one of the biggest bottlenecks here is finding people who are willing to mentor people in research. My current guess is that EAs who work as researchers should be more willing to mentor people in research, eg by mentoring people for an hour or two a week on projects that the mentor finds inside-view interesting (and therefore will be actually bought in to helping with). I think that in situations like this, it's very helpful for the mentor to be judged as Andrew Grove suggests, by the output of their organization + the output of neighboring organizations under their influence. That is, they should treat one of their key goals with their research interns as being to have the interns do things that the mentor actually thinks are useful. I think that not having this goal makes it much more tempting for the mentors to kind of snooze on the job and not really try to make the experience useful.
Yeah this seems good if you can do it, but I don't think this is that much of the bottleneck on research. It doesn't take very much time to evaluate a grant for someone to do research compared to how much time it takes to mentor them.
My current unconfident position is that I am very enthusiastic about funding people to do research if they have someone who wants to mentor them and be held somewhat accountable for whether they do anything useful. And so I'd love to get more grant applications from people describing their research proposal and saying who their mentor is; I can make that grant in like two hours (30 mins to talk to the grantee, 30 mins to talk to the mentor, 60 mins overhead). If the grants are for 4 months, then I can spend five hours a week and do all the grantmaking for 40 people. This feels pretty leveraged to me and I am happy to spend that time, and therefore I don't feel much need to scale this up more.
I think that grantmaking capacity is more of a bottleneck for things other than research output.
I don't immediately feel excited by this for longtermist research; I wouldn't be surprised if it's good for animal welfare stuff but I'm not qualified to judge. I think that most research areas relevant to longtermism require high context in order to contribute to, and I don't think that pushing people in the direction of good thesis topics is very likely to produce extremely useful research.
I'm not confident.
The post doesn't seem to exist yet so idk
I think that it is quite hard to get non-EAs to do highly leveraged research of interest to EAs. I am not aware of many examples of it happening. (I actually can't think of any offhand.) I think this is bottlenecked on EA having more problems that are well scoped and explained and can be handed off to less aligned people. I'm excited about work like The case for aligning narrowly superhuman models, because I think that this kind of work might make it easier to cause less aligned people to do useful stuff.
I feel pessimistic; I don't think that this is the bottleneck. I think that people doing research projects without mentors is much worse, and if we had solved that problem, then we wouldn't need this database as much. This database is mostly helpful in the very-little-supervision world, and so doesn't seem like the key thing to work on.
I feel pessimistic, but idk maybe Elicit is really amazing. (It seems at least pretty cool to me, but idk how useful it is.) Seems like if it's amazing we should expect it to be extremely commercially successful; I think I'll wait to see if I'm hearing people rave about it and then try it if so.
I think this is worth doing to some extent, obviously; my guess is that EAs aren't as into forecasting as they should be (including me, unfortunately). I'd need to know your specific proposal in order to have more specific thoughts.
I think that facilitating junior researchers to connect with each other is somewhat good but doesn't seem as good as having them connect more with senior researchers somehow.
I'm into this. I designed a noticeable fraction of the Triplebyte interview at one point (and delivered it hundreds of times); I wonder whether I should try making up an EA interview.
Seems cool. I think a major bottleneck here is people who are extremely extroverted and have lots of background and are willing to spend a huge amount of time talking to a huge amount of people. I think that the job "spend many hours a day talking to EAs who aren't as well connected as would be ideal for 30 minutes each, in the hope of answering their questions and connecting them to people and encouraging them" is not as good as what I'm currently doing with my time, but it feels like a tempting alternative.
I am excited for people trying to organize retreats where they invite a mix of highly-connected senior researchers and junior researchers to one place to talk about things. I would be excited to receive grant applications for things like this.
I'm not sure that this is better than providing funding to people, though it's worth considering. I'm worried that it has some bad selection effects, where the most promising people are more likely to have money that they can spend living in closer proximity to EA hubs (and are more likely to have other sources of funding) and so the cheapo EA accommodations end up filtering for people who aren't as promising.
Another way of putting this is that I think it's kind of unhealthy to have a bunch of people floating around trying unsuccessfully to get into EA research; I'd rather they tried to get funding to try it really hard for a while, and if it doesn't go well, they have a clean break from the attempt and then try to do one of the many other useful things they could do with their lives, rather than slowly giving up over the course of years and infecting everyone else with despair.
I'm not sure; seems worth people making some materials, but I'd think that we should mostly be relying on materials not produced by EAs.
I am a total sucker for this stuff, and would love to make it happen; I don't think it's a very leveraged way of working on increasing the EA-aligned research pipeline though.
Yeah I'm into this; I think that strong web developers should consider reaching out to LessWrong and saying "hey do you want to hire me to make your site better".
I think Ben Todd is wrong here. I think that the number of extremely promising junior researchers is totally a bottleneck and we totally have mentorship capacity for them. For example, I have twice run across undergrads at EA Global who I was immediately extremely impressed by and wanted to hire (they both did MIRI internships and have IMO very impactful roles (not at MIRI) now). I think that I would happily spend ten hours a week managing three more of these people, and the bottleneck here is just that I don't know many new people who are that talented (and to a lesser extent, who want to grow in the ways that align with my interests).
I think that increasing the number of people who are eg top 25% of research ability among Stanford undergrads is less helpful, because more of the bottleneck for these people is mentorship capacity. Though I'd still love to have more of these people. I think that I want people who are between 25th and 90th percentile intellectual promisingness among top schools to try first to acquire some specific and useful skill (like programming really well, or doing machine learning, or doing biology literature reviews, or clearly synthesizing disparate and confusing arguments), because they can learn these skills without needing as much mentorship from senior researchers and then they have more of a value proposition to those senior researchers later.
This seems almost entirely useless; I don't think this would help at all.
Seems like a good use of someone's time.
This was a pretty good list of suggestions. I guess my takeaways from this are:
Thanks, I think this is a great topic and this seems like a useful list (although I do find reading through 19 different types of options without much structure a bit overwhelming!).
I'll just ~repost a private comment I made before.
This feels like an especially promising area to me. I'd guess there are lots of cases where this would be very beneficial for the junior researcher and at least a bit beneficial for the experienced researcher. It just needs facilitation (or something else, e.g. a culture change where people try harder to make this happen themselves, some strong public encouragement to juniors to make this happen, ...).
This isn't based on really strong evidence, maybe mostly my own (limited) experience + assuming at least some experienced researchers are similar to me. And that there are lots of excellent junior researcher candidates out there (again from first hand impressions).
This also seems like a big deal and an area where maybe you could improve things significantly with a relatively small amount of effort. I don't have great context here though.
Thanks for these thoughts!
Interesting. I received similar feedback on the previous post in the sequence, and re-organised it into "clusters" in response to that. And I've received similar feedback on a separate, upcoming draft of mine that also has a big list of things, and due to that feedback I plan to organise that list into clusters before publishing the post. Maybe this is a recurring issue with my writing that I should be on the lookout for. So thanks for that feedback :)
I guess this also relates to my caveat that "There are various other ways to carve up the space of options, various complementary framings that can be useful, etc.", and to me trying to produce these posts relatively quickly and to be relatively thorough. I expect with more time, I could come up with better ways to organise the space of options - e.g. via creating diagrams representing various different pathways to getting more EA-aligned research or researchers, showing how each intervention could connect to one or more steps on those pathways, and then somehow using that to organise the interventions into broad types and then subtypes. (And if someone else did that, I'd be interested to read what they come up with!)
One (maybe?) low-effort thing that could be nice would be saying "these are my top 5" or "these are listed in order of how promising I think they are" or something (you may well have done that already and I missed it).
Ah, yes, this is probably useful and definitely low-effort (I've now done it in 1 minute, due to your comment).
The list was actually already in order of how promising I think they are, and I mentioned that in footnote 1. But I shouldn't expect people to read footnotes, and your feedback plus that other feedback I got on other posts suggests that readers want that sort of thing enough / find it useful enough that it should be said in the main text. So I've now moved that info to the main text (in the summary, before I list the 19 interventions).
I think the main reason I originally put it in a footnote is that it's hard to know what my ranking really means (since each intervention could be done in many different ways, which would vary in their value) or how much to trust it. But my ranking is still probably better than the ranking a reader would form, or than an absence of ranking, given that I've spent more time thinking about this. Going forward, I'll be more inclined to just clearly tell readers things like my ranking, and less focused on avoiding "anchoring" them or things like that.
(So thanks again for the feedback!)