Founders Pledge has recently expanded its research team significantly and is now considering its research strategy for the next 12 months. This matters because our pledge value is ~$2bn and counting. I would welcome suggestions on topics that could be promising for us to research going forward. These suggestions could be promising under various different ethical and empirical premises, catering to:
- Donors solely focused on high-income country problems.
- Donors focused on animal welfare.
- Donors focused on the long-term future of sentient life.
- Donors focused on GCRs and existential risk.
- Donors focused on improving the welfare of the current generation of humans.
- Donors interested in impact investing/social entrepreneurship.
Topics we are currently considering include:
- Climate change/clean energy innovation
- Improving science
- Sundry x-risks/GCRs
- Increasing economic growth
- Animal product alternatives
- Improving political institutions and political wisdom
- Reducing political bias and partisanship
- Pain relief in poor countries
Thoughts on these topics and suggestions for any others would be appreciated. Meta-thoughts on how to approach this selection task would also be handy.
Thanks for asking this question. I support the approach of asking relevant people in the space for input into a research agenda, and I am happy to see that other organizations are also doing it.
Your question inspired me to write a short post on a methodology of systematically integrating stakeholders' and decision-makers' input into the research agenda. You might find this meta-methodology helpful.
Out of the areas you mention, I'd be very interested in the following:
Animal product alternatives 6/10
Pain relief in developing countries 6/10
Improving science 9/10
Ideas not included on your list:
GiveWell recently published its list of areas they are planning to explore. I think some of them might be of interest to donors focused on improving the welfare of the current generation of humans and high-income countries’ problems.
As you know, GW’s research is very diligent. Consequently, it takes a long time to finalize. I would be interested in having preliminary research conducted by other organizations.
Regarding donors focused on animal welfare:
I am currently working on CE’s agenda for the next year in the areas of global poverty/health, animal advocacy, and mental health. At the end of September I will be able to list more areas and research questions worth investigating that CE cannot cover this year. I am narrowing down a list of roughly 400 research ideas (across the three cause areas). Let me know if you are interested in hearing more about it.
Cognitive enhancement research
I'd love to see an independent dive into consciousness & moral patienthood.
Luke Muehlhauser did a thorough report (a) on this a couple of years ago. As far as I know, that work is informing a lot of EA prioritization. It's quite opinionated, and I haven't seen much discussion of its conclusions (there's some in the AMA; the topic definitely warrants more).
Consciousness and its relationship to morality is complicated enough & important enough that an independent pass seems high value.
Potential entry point: Integrated Information Theory is currently pretty prominent in neuroscience; I'd love to see an EA steelman of it. (Luke on IIT, after giving a brief explainer: "let me jump straight to my reservations about IIT.")
Also would be great to see an EA steelman of panpsychism, which is considered plausible by a bunch of philosophers and some scientists.
Have you seen Rethink Priorities work on this? https://www.rethinkpriorities.org/invertebrate-sentience-table
While the purpose was to investigate invertebrate sentience, they also covered various species of vertebrates, plants, and single-celled organisms for comparison.
I guess I'm desiring more of a common vocabulary here, maybe something like "here are some open questions about consciousness that are cruxy, here's where [our organization] ended up on each of those questions, here are some things that could change our mind."
Luke did a good job of this in his report. From a quick look at Rethink Priorities' consciousness stuff, I'm not sure what they concluded about the important open questions. (e.g. Where do they land on IIT? Where do they land on panpsychism? What premises would I have to hold to agree with their conclusions?)
I should probably only speak for myself and not the entire team, but I think the breakdown is something like:
On IIT: quite skeptical / lean against
On panpsychism: quite skeptical / lean against
The key assumptions are:
(1) epiphenomenalism (in the traditional sense) is false
(2) methodological naturalism
(3) "inference to the best explanation" is a worthwhile method in this case
We largely chose not to do this because we mostly just agree with what Luke wrote and didn't think we would be able to meaningfully improve upon it.
fwiw I found your comment really helpful & I think the RP content would benefit from including a sketch like this.
Thanks for highlighting; I had only thought a little about RP's work on consciousness. I'll take a closer look. (This essay seems especially relevant.)
Yeah, I'd recommend reading that essay, the feature reports, and also the cause profile.
Got it, thanks!
I ended up looking at some theories of consciousness and wrote Physical theories of consciousness reduce to panpsychism. Brian Tomasik has also of course written plenty about panpsychism, and I reference some of his writing.
Thank you for doing this! I was excited to see your piece, and have been thinking about it.
Scott Aaronson and Giulio Tononi (the main advocate of IIT) and others had an interesting exchange on IIT which goes into the details more than Muehlhauser's report does. (Some of it is cited and discussed in the footnotes of Muehlhauser's report, so you may well be aware of it already.) Here, here and here.
Here are a few different areas that look promising. Some of these are taken from other organizations’ lists of promising areas, but I expect more research on each of them to be high expected value.
Just wanted to mention that I also think that improving political institutions and wisdom (and general capacity building) is quite interesting. I think policy in general is a semi-neglected EA area that could be highly valuable, covering everything from advocating for known high-impact policies where they aren't yet in place (e.g. tobacco taxation) to examining new policies that could be implemented (e.g. novel ways of stopping illicit financial outflows from developing countries). I think GiveWell has also been looking into this field, so I'm sure they have some thoughts here. I've been researching tobacco tax policy mainly in LMICs (and tobacco policies more broadly as a byproduct of that research) and am happy to chat about that if it's helpful, though I'm a relative novice in the field.
There are many organizations doing research on different projects, such as GiveWell, OPP, CE, ACE, 80k, etc. Why not stand on their shoulders instead of doing more research? Or fund researchers specifically to work in these organizations (as they already have their ways of working sorted out)?
I'd need a better understanding of how Founders Pledge works to be able to say anything intelligent. I'm guessing the idea is something like:
Is that how it actually happens?
Yes, it's something like that, except that we do make specific recommendations suited to donors' core values, and that they typically make donations via our donor-advised fund rather than directly.
Cool! Are you able to indicate roughly what order of magnitude of donations you would expect to contribute per year, over the next few years, in the promising areas (or any of the others if they're significantly bigger than those), such as:
Donors focused on the long-term future of sentient life.
Donors focused on GCRs and existential risk.
Sundry x-risks/GCRs
Improving political institutions and political wisdom
I would expect it to be in the millions per year, though I don't think I should throw specific figures around on the forum.
No problem. I've also had a skim of the x-risk report to get an idea of what research you're talking about.
Would you expect the donors to be much more interested in some of the areas you mention than others, or similarly interested in all the areas?
I think we will be able to convince enough of them to donate to high-impact areas regardless of what those areas are.
[My views only]
Thanks for putting up with my follow-up questions.
Out of the areas you mention, I'd be very interested in:
I'd be interested in:
I'd be a little interested in:
I think the other might be disadvantageous, based on my understanding that it's better for EA to train people up in longtermist-relevant areas, and to be perceived as being focused on the same.
Out of those you haven't mentioned, but that seem similar, I'd also be interested in:
I'm curious what this is referring to. Are there specific instances of such pressure being applied on Open Phil that you could point to?
I don't have any inside info, and perhaps "pressure" is too strong, but Holden reported receiving advice in that direction in 2016:
Not sure if this counts, but I did make a critique that Open Phil seemed to have evaluated MIRI in a biased way relative to OpenAI.
Thanks a lot for this, Ryan. Re improving science, what do you make of the worry that the long-term sign of the effect is unclear, because improving science doesn't produce differential technological development and instead broadly accelerates the growth of all knowledge, including potentially harmful knowledge?
I think it's a reasonable concern, especially for AI and bio, and I guess that is part of what a grantmaker might investigate. Any such negative effect could be offset by: (1) associating scientific quality with EA and recruiting competent scientists into EA, (2) improving the quality of risk-reducing research, and (3) improving commentary/reflection on science (which could help with identifying risky research). My instinct is that (1)-(3) outweigh the risk-increasing effects, at least for many projects in this space, and that most relevant experts would think so, but it would be worth asking around.
+1 to doing something with Sci-Hub.
Sci-Hub has had a huge positive impact. Finding ways to support it / make it more legal / defend it from rent-seeking academic publishers would be great.
Mental health (especially in developing countries, e.g. a more thorough look at Strong Minds and similar organizations).
Fighting human rights violations around the globe.