Founders Pledge has recently expanded its research team significantly and is now considering its research strategy for the next 12 months. This is important, as our pledge value is ~$2bn and counting. I would welcome suggestions on which topics could be promising for us to research going forward. These suggestions could be promising according to various different ethical and empirical premises, catering to:

  • Donors solely focused on high-income country problems.
  • Donors focused on animal welfare.
  • Donors focused on the long-term future of sentient life.
  • Donors focused on GCRs and existential risk.
  • Donors focused on improving the welfare of the current generation of humans.
  • Donors interested in impact investing/social entrepreneurship.

Topics we are currently considering include:

  • Climate change/clean energy innovation
  • Improving science
  • Sundry x-risks/GCRs
  • Increasing economic growth
  • Animal product alternatives
  • Improving political institutions and political wisdom
  • Reducing political bias and partisanship
  • Pain relief in poor countries
  • etc

Thoughts on these topics and suggestions for any others would be appreciated. Meta-thoughts on how to approach this selection task would also be handy.

Cheers!


Comments (31)

[My views only]

Thanks for putting up with my follow-up questions.

Out of the areas you mention, I'd be very interested in:

  • Improving science. Things like Academia.edu and Sci-Hub have been interesting, as are efforts to replace LaTeX and to reform publishing incentives. In general, there seems to be plenty of room for improvement!

I'd be interested in:

  • Improving political institutions and political wisdom: EA might need to escalate its involvement in many areas adjacent to this, such as policy at the intersection of great-power relations and pivotal technologies. It would be very interesting to better understand what can be done with funding alone.
  • Reducing political bias and partisanship: this seems hard but somewhat important. Most lobbyists are not trying to do this, and Russia is actively trying to do the opposite. It would be interesting to see whether more can be done in this space. Fact-checking websites and investigative journalism (e.g. Bellingcat) are interesting here too. Another promising area is counteracting political corruption.
  • Sundry x-risks/GCRs

I'd be a little interested in:

  • Increasing economic growth

I think the others might be disadvantageous, based on my understanding that it's better for EA to train people up in longtermist-relevant areas, and to be perceived as being focused on the same.

Out of those you haven't mentioned, but that seem similar, I'd also be interested in:

  • Promotion of effective altruism
  • Scholarships for people working on high-impact research
  • More on AI safety: OpenPhil seems to be funding high-prestige mostly-aligned figures (e.g. Stuart Russell, OpenAI) and high-prestige unaligned figures (e.g. their fellows), but has mostly not funded low-to-mid prestige highly-aligned figures (with the notable exceptions of MIRI, Michael C and Dima K). Other small but comparably informed funders favor low-to-mid prestige highly-aligned targets to a greater extent (e.g. Paul's funding for AI safety research), and Paul and Carl argued to OpenPhil that it should fund MIRI more. I think there are residual opportunities to fund other low-to-mid prestige highly-aligned figures. [edited for clarity]

+1 to doing something with Sci-Hub.

Sci-Hub has had a huge positive impact. Finding ways to support it / make it more legal / defend it from rent-seeking academic publishers would be great.

[anonymous]

Thanks a lot for this, Ryan. Re improving science: what do you make of the worry that the long-term sign of the effect is unclear, because improving science doesn't produce differential technological development but instead broadly accelerates the growth of all knowledge, including potentially harmful knowledge?

I think it's a reasonable concern, especially for AI and bio, and I guess that is part of what a grantmaker might investigate. Any such negative effect could be offset by: (1) associating scientific quality with EA and recruiting competent scientists into EA, (2) improving the quality of risk-reducing research, and (3) improving commentary/reflection on science (which could help with identifying risky research). My instinct is that (1-3) outweigh the risk-increasing effects, at least for many projects in this space, and that most relevant experts would think so, but it would be worth asking around.

various people's pressure on OpenPhil to fund MIRI

I'm curious what this is referring to. Are there specific instances of such pressure being applied on Open Phil that you could point to?

Not sure if this counts, but I did make a critique that Open Phil seemed to have evaluated MIRI in a biased way relative to OpenAI.

I don't have any inside info, and perhaps "pressure" is too strong, but Holden reported receiving advice in that direction in 2016:

"Paul Christiano and Carl Shulman–a couple of individuals I place great trust in (on this topic)–have argued to me that Open Phil’s grant to MIRI should have been larger. (Note that these individuals have some connections to MIRI and are not wholly impartial.) Some other people I significantly trust on this topic are very non-enthusiastic about MIRI’s work, but having a couple of people making the argument in favor carries substantial weight with me from a “let many flowers bloom”/”cover your bases” perspective. (However, I expect that the non-enthusiastic people will be less publicly vocal, which I think is worth keeping in mind in this context.)"

Thanks for asking this question. I support and follow the approach of asking relevant people in the space for input into a research agenda, and I am happy to see that other organizations are doing this too.

Meta-thoughts on how to approach this selection task would also be handy.

Your question inspired me to write a short post on a methodology of systematically integrating stakeholders' and decision-makers' input into the research agenda. You might find this meta-methodology helpful.

Out of the areas you mention, I'd be very interested in the following:

  • Animal product alternatives 6/10
  • Pain relief in developing countries 6/10
  • Improving science 9/10

Ideas not included on your list:
GiveWell recently published a list of areas it is planning to explore. I think some of them might be of interest to donors focused on improving the welfare of the current generation of humans and on high-income countries’ problems.

  • Tobacco, alcohol, and sugar control
  • Air pollution regulation
  • Micronutrient fortification and biofortification
  • Improving government program selection
  • Improving government implementation
  • Immigration reform
  • Mosquito gene drive advocacy and research
  • Mental health (interventions comparison)
  • Sleep quality improvement

As you know, GW’s research is very diligent. Consequently, it takes a long time to finalize. I would be interested in having preliminary research conducted by other organizations.

Regarding donors focused on animal welfare:

  • Producer outreach, for example, providing subsidies for farmers interested in higher-welfare farming
  • CRISPR-based gene drives to address wild animals’ suffering
  • Wild animal suffering (WAS) intervention comparison
  • Affecting law and law enforcement focused on welfare improvements for chickens and fish in Asia
  • Insect welfare intervention comparison, for example, reducing silk production, painkillers for insects used in research, etc.

I am currently working on CE’s agenda for next year in the areas of global poverty/health, animal advocacy, and mental health. At the end of September, I will be able to list more areas and research questions worth investigating that CE cannot cover this year. I am narrowing down a list of research ideas from 400 ideas (across those three cause areas). Let me know if you are interested in hearing more about it.

Cognitive enhancement research.


Here are a few different areas that look promising. Some of these are taken from other organizations’ lists of promising areas, but I expect more research on each of them to have high expected value.

  • Donors solely focused on high-income country problems.
    • Mental health research (which could help both high- and low-income countries)
    • Alcohol control
    • Sugar control
    • Salt control
    • Trans-fats control
    • Air pollution regulation
    • Metascience
    • Medical research
    • Lifestyle changes, including "nudges" (e.g. more exercise, shorter commutes, behaviour change, education)
    • Mindfulness education
    • Sleep quality improvement
  • Donors focused on animal welfare.
    • Wild animal suffering interventions (non-meta, non-habitat-destruction)
    • Governmental animal welfare policy, particularly in locations outside the USA and EU
    • Treating diseases that affect wild animals
    • Banning live bait fish
    • Improving welfare in the transport and slaughter of turkeys
    • Pre-hatch sexing
    • Brexit-related preservation of animal welfare policy
  • Donors focused on improving the welfare of the current generation of humans.
    • Pain relief in poor countries
    • Contraceptives
    • Tobacco control
    • Lead paint regulation
    • Road traffic safety
    • Micronutrient fortification and biofortification
    • Sleep quality improvement
    • Immigration reform
    • Mosquito gene drive advocacy and research
    • Voluntary male circumcision
    • Research to increase crop yields

I'd need a better understanding of how Founders Pledge works to be able to say anything intelligent. I'm guessing the idea is something like:

  • when founders are due to donate, you prompt them
  • you ask them what kind of advice they would like
  • you give them some research relevant to that, and do/don't make specific recommendations ???
  • they make donations directly

Is that how it actually happens?

[anonymous]

Yes, it's something like that, except that we do make specific recommendations, suited to their core values, and that they typically make donations via our donor-advised fund rather than directly.

Cool! Are you able to indicate roughly what order of magnitude of donations you would expect to contribute per year, over the next few years, in the promising areas (or any of the others if they're significantly bigger than those), such as the following?

  • Donors focused on the long-term future of sentient life
  • Donors focused on GCRs and existential risk
  • Improving science
  • Sundry x-risks/GCRs
  • Improving political institutions and political wisdom

[anonymous]

I would expect it to be in the millions/yr, though I don't think I should throw about specific figures on the forum.

No problem. I've also had a skim of the x-risk report to get an idea of what research you're talking about.

Would you expect the donors to be much more interested in some of the areas you mention than others, or similarly interested in all the areas?

[anonymous]

I think we will be able to convince enough of them to donate to high-impact areas, regardless of what those areas are.

I'd love to see an independent dive into consciousness & moral patienthood.

Luke Muehlhauser did a thorough report on this a couple of years ago. As far as I know, that work is informing a lot of EA prioritization. It's quite opinionated, and I haven't seen too much discussion of its conclusions (there's some in the AMA; the topic definitely warrants more).

Consciousness and its relationship to morality are complicated enough & important enough that an independent pass seems high value.

Potential entry point: Integrated Information Theory is currently pretty prominent in neuroscience; I'd love to see an EA steelman of it. (Luke on IIT, after giving a brief explainer: "let me jump straight to my reservations about IIT.")
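For context, IIT's central quantity is "integrated information", Φ. As a rough gloss (this is my simplified sketch of the older IIT 2.0 formulation from Balduzzi & Tononi 2008; the current IIT 3.0 version is considerably more involved):

$$\Phi(X) = \mathrm{EI}\big(X \to X;\, P^{\mathrm{MIP}}\big), \qquad P^{\mathrm{MIP}} = \operatorname*{arg\,min}_{P} \frac{\mathrm{EI}(X \to X;\, P)}{K(P)}$$

Here EI is the effective information the system generates about its own prior state across a partition P, K(P) is a normalization for partition size, and the minimum information partition (MIP) is the cut that destroys the least information. On IIT, a system is conscious to the degree that Φ > 0, which is what gives the theory its panpsychist-flavored implications and drives many of the objections to it.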

Also would be great to see an EA steelman of panpsychism, which is considered plausible by a bunch of philosophers and some scientists.

Have you seen Rethink Priorities' work on this? https://www.rethinkpriorities.org/invertebrate-sentience-table

While the purpose was to investigate invertebrate sentience, they also covered different species of vertebrates, plants and single-celled organisms for comparison.

I guess I'm hoping for more of a common vocabulary here, maybe something like: "here are some open questions about consciousness that are cruxy, here's where [our organization] ended up on each of those questions, here are some things that could change our mind."

Luke did a good job of this in his report. From a quick look at Rethink Priorities' consciousness stuff, I'm not sure what they concluded about the important open questions. (e.g. Where do they land on IIT? Where do they land on panpsychism? What premises would I have to hold to agree with their conclusions?)

I should probably only speak for myself and not the entire team, but I think the breakdown is something like:

Where do they land on IIT?

Quite skeptical / lean against

~

Where do they land on panpsychism?

Quite skeptical / lean against

~

What premises would I have to hold to agree with their conclusions?

The key assumptions are:

(1) epiphenomenalism (in the traditional sense) is false

(2) methodological naturalism

(3) "inference to the best explanation" is a worthwhile method in this case

~

here are some open questions about consciousness that are cruxy, here's where [our organization] ended up on each of those questions, here are some things that could change our mind

We largely chose not to do this because we mostly just agree with what Luke wrote and didn't think we would be able to meaningfully improve upon it.

Thanks!


We largely chose not to do this because we mostly just agree with what Luke wrote and didn't think we would be able to meaningfully improve upon it.

fwiw I found your comment really helpful & I think the RP content would benefit from including a sketch like this.

Thanks for highlighting; I had only thought a little about RP's work on consciousness. I'll take a closer look. (This essay seems especially relevant.)

Yeah, I'd recommend reading that essay, the feature reports, and also the cause profile.

Got it, thanks!

I ended up looking at some theories of consciousness and wrote "Physical theories of consciousness reduce to panpsychism". Brian Tomasik has also, of course, written plenty about panpsychism, and I reference some of his writing.

Thank you for doing this! I was excited to see your piece, and have been thinking about it.

Scott Aaronson and Giulio Tononi (the main advocate of IIT) and others had an interesting exchange on IIT, which goes into more detail than Muehlhauser's report does. (Some of it is cited and discussed in the footnotes of Muehlhauser's report, so you may well be aware of it already.) Here, here and here.

There are many organizations already doing research on different projects, such as GiveWell, OPP, CE, ACE, 80k, etc. Why not stand on their shoulders instead of doing more research yourselves? Or fund researchers to work within these organizations (since they already have their ways of working sorted out)?

Just wanted to mention that I also think that improving political institutions and wisdom (and general capacity building) is quite interesting. I think policy in general is a semi-neglected EA area that could be highly valuable: everything from advocating for known high-impact policies where they aren't yet in place (e.g. tobacco taxation) to examining new policies that could be implemented (e.g. novel ways of stopping illicit financial outflows from developing countries). I think GiveWell has also been looking into this field, so I'm sure they have some thoughts here. I've been researching tobacco tax policy, mainly in LMICs (and tobacco policies more broadly as a byproduct of that research), and am happy to chat about that if it's helpful, but I'm a relative novice in the field.

Mental health (especially in developing countries, e.g. a more thorough look at StrongMinds etc.).

Fighting human rights violations around the globe.
