
At Open Philanthropy, we aim to give as effectively as we can. To find the best opportunities, we’ve looked at many different causes, some of which have become our current focus areas.

Even after a decade of research, we think there are many excellent grantmaking ideas we haven’t yet uncovered. So we’ve launched the Cause Exploration Prizes around a set of questions that will help us explore new areas.

We’re most interested in responses to our open prompt: “What new cause area should Open Philanthropy consider funding?”

We also have prompts in a number of more specific areas, including health and development (see our website for the full list).

We’re looking for responses of up to 5,000 words that clearly convey your findings. It’s fine to use bullet points and informal language. For more detail, see our guidance for authors. To submit, go to this page.

We hope that the Prizes help us to:

  • Identify new cause areas and funding strategies.
  • Develop our thinking on how best to measure impact.
  • Find people who might be a good fit to work with us in the future.

You can read more about the Cause Exploration Prizes on our dedicated website. You’ll also be able to read all of the submissions on the Effective Altruism Forum later this summer – stay tuned!

Prizes, rules, and deadlines

All work must be submitted by 11:00 pm PDT on August 11, 2022 (deadline extended from August 4 on July 28).

You are almost certainly eligible. We think these questions can be approached from many directions; you don’t need to be an expert or have a PhD to apply.

There’s a $25,000 prize for the top submission, and three $15,000 prizes. Anyone who wins one of these prizes will be invited to present their work to Open Phil’s cause prioritization team in San Francisco (with compensation for time and travel). And we will follow up with authors if their work contributes to our grantmaking decisions!

We will also award twenty honorable mentions ($500 each), plus a participation award ($200) to each of the first 200 submissions that are made in good faith and don't win another prize.

All submissions will be shared on the Forum to allow others to learn from them. If participants prefer, their submission can be published anonymously, and we can handle the logistics of posting to the Forum. See more detail here.

For full eligibility requirements and prize details, see our rules and FAQs.

If you have any questions not answered by that page, contact us at hello@causeexplorationprizes.com.

View submissions

You can use the Cause Exploration Prizes tag to see published submissions.

Comments (22)



Is there a link to what OpenPhil considers its existing cause areas? The open prompt asks for new cause areas, so things you already fund or intend to fund are presumably ineligible, but while the Cause Exploration Prizes page gives some examples, it doesn't link to a clear list of all of them. In a few minutes looking around the Openphilanthropy.org site, the lists I could find were either much more general than what you're looking for here (thematic areas like "Science for Global Health") or more specific (individual grants awarded), but I may be missing something.

This is a good question, and it's certainly something that could be clearer on the website. The closest thing to what you're asking for is here, but the page is slightly dated and due to be refreshed soon. Some of the focus areas are also at a very high level of abstraction (e.g. global health and development), which should not be read as meaning we don't want suggestions for opportunities within those focus areas.

The page for the new cause area prompt deliberately specifies that we are open to suggestions both for new problems to work on and for new ways to address problems we are already working on. To pick an example, Open Philanthropy already funds work to fight malaria, through its funding of GiveWell-recommended charities that do service delivery (e.g. AMF, Malaria Consortium) and through supporting research into gene drives (e.g. Target Malaria). But there are potentially other ways to fight malaria that we haven't funded historically (e.g. vaccine development).

I would suggest authors do a quick check through our grant database before digging deep into a particular cause. If there is a specific problem you are considering writing about and are concerned might be too similar to what we already do, you're welcome to email hello@causeexplorationprizes.com.

To clarify, are you also interested in proposals concerning animal welfare?

Yes - this fits within our Global Health and Wellbeing (GHW) portfolio. From the FAQ page:

Can I write about non-human animals?

Yes. Open Philanthropy is a major funder of work to improve farm animal welfare. If you want to write about a potential new cause area where the primary beneficiaries are non-human animals, please use the open prompt.

Should we assume that no overlap with FTX's project ideas competition is allowed? (Links: Announcement, Awards)

However, we are trying to reward the creation and sharing of new work, so you may not submit work that has previously been published or posted publicly on the internet (e.g., on a blog, preprint server, or academic article). (source)

I posted my FTX application here, but it was for a specific project (launching a replication institute). I could write up a broader proposal for scientific reproducibility as a cause area. Would that be allowed?

Yes, a broader proposal on scientific reproducibility as a potential cause area would be appropriate for this. Your proposed project could be an example grantee, but it would be a great idea to explore other ways that a funder could help address the problem as well (even if you conclude that something like I4R is the most cost-effective opportunity).

Are there any limitations on the kinds of feedback we can/should get before submitting? For example, is it okay to:
- Get feedback from an OpenPhil staff member?
- Publish on the forum, get feedback, and make edits before submitting a final draft?
- Submit an unpublished piece of writing which has previously been reviewed?

If so, should reviewers be listed in order to provide clarity on input? Or omitted to avoid the impression of an "endorsement"?

All of those things are ok. Open Phil staff shouldn't be listed as co-authors, since they are not eligible for the prizes. A brief acknowledgement section is welcome if you've had substantial input from others who are not co-authors.

If you are submitting an unpublished piece of writing which you've already produced, please make sure it is answering a question that we've put forward and is geared towards the perspective of a funder (see our guidance page for more detail).

So, are we going to talk about the awesome aesthetic design, like the five dimensional meta flower?

Can you write about the design choices and who did this work? 

Is Aaron Gertler secretly this generation’s Beethoven? 

Is Open Phil harboring an immense pool of artistic talent and will EA see a renaissance in design?


The flower was licensed from this site.

The designer saw and appreciated this comment, but asked not to be named on the Forum.

Meta comment[1] - Ok, well. I downvoted my own comment above because it rose to the top, above a comment that I think is more substantive.

I don't have much more upvoting/downvoting power, so please stop upvoting my comment, or at least upvote the aaronb50 comment[2].

  1. ^

Yes, this comment you are reading is insipid, but it's still better than wobbly arguing over neartermism/longtermism instead of doing the thing.

  2. ^

    Probably upvotes should be by merit, not by relative position, so the ideas in this comment might be wrong in general. 

Sorry to hear! I think you might have clicked it during the split-second I was updating that page. Please could you give it another try, and send hello@causeexplorationprizes.com a screenshot of whatever error you're getting if it doesn't work.

It's working now, thanks.

The link to the Health prompt page on the website leads to the Development prompt page.

Thanks for spotting - this should now be fixed.


You say you want us to estimate the cost-effectiveness of our proposals. Does this mean that suggestions like scientific research or lobbying where it’s very hard to estimate costs and probability of success are invalid? How would you respond if someone submitted a proposal for, say, funding anti-aging research? (this is not what I plan to submit btw) Would it have the potential of winning a prize?

And what about providing funding for for-profit companies that could have a positive impact?

Suggestions of scientific research and lobbying/advocacy, or other activities where cost-effectiveness is hard to measure, are all potentially valid and would be eligible for prizes (and the $200 participation awards). For each of these, I'd say that costs are relatively estimable based on what individual research projects cost, current research spending in an area, the cost of comparable advocacy campaigns, etc. I agree that the chances of success are more difficult, but they can be estimated to at least some extent from comparable base rates. There will, of course, be substantial uncertainty in any cost-effectiveness estimate that relies on research or advocacy, but as long as your reasoning is transparent, that's ok. You can read more about this on our webpage on making a grant and on the guidance page for the Cause Exploration Prizes.
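For illustration, here's a minimal back-of-envelope sketch of this kind of estimate. Every number (campaign cost, base rate, benefit) is a hypothetical placeholder, not an Open Phil figure:

```python
# Back-of-envelope cost-effectiveness sketch for an advocacy campaign.
# All numbers below are illustrative placeholders, not Open Phil estimates.

campaign_cost = 2_000_000     # cost of a comparable advocacy campaign, USD
base_rate_success = 0.10      # success rate of similar past campaigns
benefit_if_success = 500_000  # e.g. DALYs averted if the campaign succeeds

expected_benefit = base_rate_success * benefit_if_success
cost_per_daly = campaign_cost / expected_benefit

print(f"Expected benefit: {expected_benefit:,.0f} DALYs averted")
print(f"Implied cost-effectiveness: ${cost_per_daly:,.0f} per DALY averted")
```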

Anti-aging research could be an interesting submission.

Investments in for-profit companies are eligible as suggestions - Open Philanthropy is a flexible funder. When thinking about the costs of such an investment program, you will want to reduce the costs by any returns that the investment generates (perhaps with a discount to reflect the opportunity cost of investing it elsewhere).
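As a hedged sketch of that adjustment (again, all numbers are hypothetical), the net cost of the program is the outlay minus the discounted expected returns:

```python
# Net cost of an impact-investment program, illustrative numbers only.

outlay = 10_000_000           # capital invested, USD
expected_returns = 8_000_000  # expected payback from the investments, USD
discount_factor = 0.7         # haircut for the opportunity cost of capital

net_cost = outlay - expected_returns * discount_factor
print(f"Net cost of the program: ${net_cost:,.0f}")
```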

Are we allowed to submit multiple candidate cause areas for evaluation? If so, would that be as separate proposals?


From their FAQs:

Can I submit multiple entries?

Yes. You can submit up to 3 entries as an author or co-author.
