
Quite wonderfully, there has been a proliferation of research questions EAs have identified as potentially worth pursuing, and now even a proliferation of collections of such questions. So like a good little EA, I’ve gone meta: this post is a collection of all such collections I’m aware of. I hope this can serve as a central directory to all of those other useful resources, and thereby help interested EAs find questions they can investigate to help inform our whole community’s efforts to do good better.

Please also leave a comment if you know of any research question collections that aren't yet included in this post!

Some things to note:

  • It may be best to just engage with one set of questions that are relevant to your skills, interests, or plans, and ignore the rest of this post.
  • It’s possible that some of these questions are no longer “open”.
  • I’ve included some things that aren’t explicitly written as collections of research questions, as long as research questions could very easily be inferred from them (e.g., from the problems people identify, or the posts people want written).

Most recent update to this post (typically adding a link to another collection): December 2022.

Various EA-related topics

Mostly focused on longtermism, existential risks, or GCRs

Mostly focused on AI

Technical/theoretical AI safety/alignment

AI policy/strategy/governance

A mix of technical and non-technical

Mostly focused on biorisk or coronavirus

Cause prioritisation/global priorities

Animal welfare

(Perhaps Charity Entrepreneurship and Rethink Priorities have relevant collections or research agendas?)

Global health and development

Important unresolved research questions relevant to macroeconomic policy - Open Philanthropy Project, 2014

(Perhaps Charity Entrepreneurship and GiveWell have relevant collections or research agendas?)

Other areas many EAs are interested in

Forecasting & improving institutional decision-making

Rationality

Mental health, happiness, etc.

Other?

I’d guess there are other relevant areas for which research questions have been collected somewhere.

Potential lists of lists from which I haven't yet properly pulled the lists


Thanks to all the people who created all the lists I’ve shown and/or taken from here. And thanks to Aaron Gertler for his above-quoted thoughts, to David Kristoffersson for helpful feedback and additions, and to Remmelt Ellen for helpful comments.

This post is related to my work with Convergence Analysis.

Comments (27)



This was a very practical post. I return to it from time to time to guide my thinking on what to research next. I suggest that people consider it. I think about ways to build on the work and develop a database. I think it may have helped to catalyse a lot of good outcomes.

Hey, thanks for putting this together. I think it would be quite valuable to have these lists put up on Effective Thesis's research agenda page. My reasoning is that Effective Thesis's research agenda page probably has more viewers than this EA Forum post or the Google Doc version of this post.

Additionally, if you agree with the above, I'd be curious to hear your thoughts on how we could make Effective Thesis's research agenda page open source?

I think those are both good ideas! (This is assuming that by "open source" you mean something like "easy for anyone to make suggestions to, in a way that lets the page be efficiently expanded and updated". Did you have something else in mind?)

I don't know the Effective Thesis people personally (though what they're doing seems really valuable to me). But I've now contacted them via their website, with a message quoting your comment and asking for their thoughts. 

Yep, that's what I meant by "open source"! Awesome to hear you're taking this forward!

Update: Effective Thesis have now basically done both of the things you suggested (you can see the changes here). So thanks for the suggestions!

Glad to hear this!

For calibration: so far no one has contacted me to take on one of the research projects in the list of concrete researchy projects. And even in 1-1s with people who are interested in joining EA Israel and in taking on a research project, going over this list and thinking together about possible research questions has had very limited success.

(Upvoted)

Yeah, I've seen that sort of thing mentioned a few times, such that I no longer find it surprising, though I initially did, and I still don't fully understand why it's the case.*

That's why I included "I think we could do more to inspire and support people to actually investigate these questions than just assemble a big list", and the points after that. But I'd definitely be keen to hear more thoughts on how to provide effective inspiration and support for that. (Indeed, it seems that could be a research question in itself. Now, if only we could inspire and support people to investigate it...)

*It does seem there are a lot of interesting and important questions to be explored, many of which may not require extremely specialised skills. As well as a lot of intellectually curious, research-minded EAs interested in having more EA-y things to do. So my guess before hearing that sort of thing mentioned a few times probably would've been that there'd be more uptake of these sorts of lists, and I'm not entirely sure what ingredients are missing.

Obviously payment and organisational infrastructures would be very helpful for most people, and necessary for many. But I wouldn't guess they'd be necessary for all curious EAs with some slices of free time? I wonder if there are other levers that could be pulled to unlock some of this extra talent that seems to be floating around?

My current model is something like this. #BetterWrongThanVague

It is difficult to make a noticeable research contribution. Even small incremental steps can be intimidating and time-consuming.

It is hard to motivate oneself to work alone on someone else's problems. I think that most people probably have their own passions and models of what's important, and it's unclear why subquestion 3.5.1 should be the single thing that they focus on.
Three of the main motivators that might mitigate this are recognition (for completing the work well and presenting something interesting), better career capital (learning something new or displaying skills), and socializing/partnering.

One related thing I've thought about trying is to take on a small-scale research problem and set up an open call to collaborate on it globally. To make it successful, we could formally establish that some organisation is interested in the result (and better yet, have it supply a prize - which doesn't have to be monetary) and coordinate with local groups to assemble an initial team.

That could be fun and engaging, but I'm not sure how scalable it is or how much impact we can expect from it (uncertainty that is probably worth testing out). I've tried to start a small ALLFED-directed research group locally, as part of our research team, but that also didn't work out. I think that going global might possibly work, though.

Noticeable lack of Global Health and Development lists/topics, particularly as this is where most individual EA giving is going. Hope I can help with this at some point.

I still think this was a useful post. It's one of many posts of mine that seem like somewhat obvious low-hanging fruit that other people could've plucked too; more thoughts on that in other self-reviews here and here.

That said, I also think that, at least as far as I'm aware, this post has been less impactful and less used than I'd have guessed. I'm not actually aware of any instances where I know for sure that someone used this post to pick a research project and then followed it through to completion. I am aware of two other well-received and high-quality-seeming research question collection posts that were substantially influenced by this one (by 80k and by Lizka), but I'm also not aware of clear signs of impact from those either. This seems kinda surprising and sad.

I wrote some thoughts on why that might've happened, and what it might suggest we need to do to improve the EA-aligned research pipeline, in this sequence and in a draft "Proposal: A central, editable database to help people choose and do research projects". But I no longer plan to publish the latter draft or to strongly encourage people to work on that idea. (I can expand on why if people are interested.) I'm now focused more on other ways to improve the EA-aligned research pipeline, like causing there to be more available high-quality mentorship via research training programs, scaling EA research orgs, and sharing tips and resources on how to do research and how to do management and stuff. 

Thanks so much for this! I am keen to discuss it when Covid-19 has passed. I have some ideas and see opportunities for collaboration. EdoArad - I would love to talk with you too at that time. For context, I am one of the people involved in READI, which is led by EA volunteers and seeks to tackle high-impact/EA-aligned research questions. This is our current project; you can see other work here.

I'm very interested in the work you are doing at READI, and it would be great to discuss ideas and collaborate. 

(by the way, what does READI stand for?)

Here's a section that was in this post until today but that I've now decided is no longer worth having in the post itself:

What this could become (with your help!)

[Edited to add in April 2021: I've now drafted a post that goes into more detail on a better version of the sorts of ideas given below.]

As noted earlier, I hope this can help some of the many wonderfully curious EAs out there to find important questions they can start plugging away at, to help guide us all in our various efforts to improve the world.

But I’m sure that:

  • I’ve missed various collections of questions, especially for cause areas other than longtermism (my personal focus)
  • New collections will be made in future
  • There are many individual questions that haven’t yet been collected anywhere, or new individual questions that could be suggested (I’ve added some as “Comments” in the google doc already)
  • Some people would find this more useful if someone actually pulled out all of the questions from those collections and organised them, by topic and subtopic and so on (with the original source of each question referenced).
    • This could be in one central document, in a “family” of interlinked documents (e.g., one for each broad cause area), in a spreadsheet, or in a wiki-style page.

And I think we could do more to inspire and support people to actually investigate these questions than just assemble a big list. For example, we could somehow “attach” to each question, perhaps as comments or indented bullet points, things like:

  • thoughts on how to approach the question
  • potential breakdowns into subquestions
  • links to relevant resources
  • links to draft documents where someone has begun answering certain questions
  • “tags” indicating what sort of skills or backgrounds are required for answering each question or set of questions
  • offers of “prizes” (payment) for sufficiently high quality explorations of the questions
    • Ideally, it’d be easy to offer the prizes, stipulate the terms, and see the total amount offered by everyone for a particular question

And this could all be done collaboratively. (Plus, I don’t expect to have time to do it myself.)
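To make the "attach things to each question" idea a bit more concrete, here's a minimal sketch of what one entry in such a database might look like. It's written in Python purely for illustration; the schema, field names, and example content are all my own invented assumptions, not a description of any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchQuestion:
    """One entry in a hypothetical shared database of open research questions.

    All field names are illustrative inventions, not an existing schema.
    """
    question: str                                        # the question itself
    source: str                                          # the collection it was pulled from
    cause_area: str                                      # e.g. "AI governance", "animal welfare"
    subquestions: list = field(default_factory=list)     # potential breakdowns into subquestions
    approach_notes: list = field(default_factory=list)   # thoughts on how to approach the question
    resources: list = field(default_factory=list)        # links to relevant reading
    drafts: list = field(default_factory=list)           # links to in-progress answer documents
    skill_tags: list = field(default_factory=list)       # skills/backgrounds required
    prizes: list = field(default_factory=list)           # (offerer, amount, terms) tuples

    def total_prize_pool(self) -> float:
        """Sum the amounts offered across all prizes for this question."""
        return sum(amount for _offerer, amount, _terms in self.prizes)

# Illustrative usage (the question and all details are made up):
q = ResearchQuestion(
    question="How could prizes best be used to incentivise independent EA research?",
    source="This post's comment section",
    cause_area="Meta / EA movement building",
    skill_tags=["economics", "literature review"],
    prizes=[("Some funder", 500.0, "A sufficiently high-quality write-up")],
)
print(q.total_prize_pool())  # -> 500.0
```

A spreadsheet or wiki page could of course represent the same structure; the point is just that each question would carry its sources, breakdowns, tags, and prize offers in one place, so that (for example) the total amount offered for a particular question is easy to see.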

So here’s a Google Doc version of this post. [Edit: I'm no longer updating that when I update the post version.] Anyone can comment and make suggestions. Please do so, to make this as useful as it can be! (You can either say “someone should probably do X”, or just go do X yourself.)

Also feel free to:

  • Duplicate the doc
  • Create other docs and suggest links to them from this central directory
  • Let me know if you want to get full editing permissions and be the person “in charge” of this doc

I’d be really excited to see this develop into something that can really help people advance our movement’s collective knowledge, and to see people actually executing on that - actually making those advancements.

Thoughts from Aaron Gertler

I emailed Aaron Gertler of the Centre for Effective Altruism to ask his thoughts on how valuable something like this would be, and what its ideal eventual form might be. His reply, which he confirmed was ok for me to quote here, included the following:

I'm not sure how often people actually look at these "open question" lists to decide on research priorities, so I don't know what kind of return you'd get on your time. However, some kind of Google Doc for this should exist, and if your post is what causes that to happen, I think it will be valuable (over time, some number of people will eventually go looking for this sort of thing -- I've been asked for it before, and it will be nice to have a good place to send people).

A really comprehensive list of open questions (which is regularly updated both with new questions and with new resources relevant to old questions) would be an interesting resource, and is the kind of thing one could apply for an EA Funds grant to support; however, I think you'd first have to make a case that such a thing would be used by at least a few people who otherwise wouldn't have picked very good research topics (the Effective Thesis use case is a classic example of this). It seems to me like any such list should be research-oriented (pointing out where work can be done to resolve confusion) more than debate-oriented (pointing out what different people believe), though of course your ability to emphasize that will vary from question to question.

Hopefully that can provide food for thought for people who might want to develop this idea further.

Can you explain why you think it's not worth including in the post?

Not central to EA, but there's Gwern's open questions.

Additions under "Less technical / AI strategy / AI governance"?

-  https://forum.effectivealtruism.org/posts/WdMnmmqqiP5zCtSfv/cognitive-science-psychology-as-a-neglected-approach-to-ai 
- https://forum.effectivealtruism.org/posts/9kNqYzEAYtvLg2BbR/baobao-zhang-how-social-science-research-can-inform-ai (though this one only has three research questions and isn't focused on generating questions)

Thanks! I've now added the first of those two :)

Got my post up :). https://forum.effectivealtruism.org/posts/dKgWZ8GMNkXfRwjqH/seeking-social-science-students-collaborators-interested-in

Also "Artificial Intelligence and Global Security Initiative Research Agenda - Centre for a New American Security, no date" was published in July 2017, according to the embedded pdf in that link!

Thanks for the heads up - I've now added a link to your doc and changed the date for the CNAS agenda :)

Update: 80,000 Hours have released an article entitled Research questions that could have a big social impact, organised by discipline, which draws on the lists of questions listed by this post, but also includes some new questions (sometimes from personal correspondences with the authors). Readers may want to check that article out too. (I've now added a link from this post.) 
