
Introduction

As Chief Strategy Officer at Rethink Priorities, I am sometimes asked to share information with people interested in supporting or collaborating with us, or simply wanting to learn more about us, so that they can understand the nature of our work, our motivations, and some of our past projects. This post provides an accessible, high-level overview: the key uncertainties and considerations that drive our approach, along with selected examples of our work, which we hope serves as a useful reference.

Fundamentally, Rethink Priorities (RP) works to do good at scale, but we’re not committed to advancing any particular type of good or limited to a specific approach.

In this piece, we briefly lay out some of the uncertainties people face when trying to do good, which are among the motivating factors for our approach. Key uncertainties include (a) limited initial evidence, (b) challenges in measuring progress, and (c) difficulty determining the "right" goal in the first place. Next, we cover several key considerations for our work, including skepticism, transparency, collaborating with key decision-makers to offer practical guidance, and supporting the development of emerging fields. Along the way, we highlight several examples of projects and some initial results:

  1. Our Worldview Investigations Team has developed frameworks (such as the Cross-Cause Effectiveness Model and the Moral Parliament Tool) to help donors align their giving with their values and better acknowledge the deep uncertainties involved in such decisions.
  2. Our Global Health and Development Department’s modeling and reviews (e.g., on lead exposure) have helped to redirect millions of dollars of grants to more cost-effective options.
  3. Our Moral Weight Project wrestled with the difficult question of how to compare welfare across different species—a task that is arguably as philosophically challenging as it is practically important.
  4. Our Animal Welfare Department has sought to understand the capacity for sentience among invertebrates, such as insects and shrimps, and promote their wellbeing. As these were nascent (sub)fields, we have had the opportunity to make meaningful contributions.

Note: This piece aims to be an accessible overview, and as a result, it is not fully comprehensive. For instance, it does not cover all of our work, which spans a variety of additional areas and approaches, including surveys and data analysis, global catastrophic risks, and fiscally sponsoring promising initiatives.

On the uncertainties

One of the things that motivates me daily to work at RP is the sheer amount of uncertainty around the best way of doing good. RP exists to help address these uncertainties, but we don’t claim to have all the answers. There are still many, many unanswered questions: How much should we invest in solving a particular problem? Which approach should we adopt? How can we be sure we're actually making a positive difference?  

When your aim is making money, it’s relatively straightforward to check if you’re succeeding. Doing good isn’t like that. What might for-profit businesses look like if we didn’t know what it meant to make money and there were no direct metrics like the size of your bank account or share price? This scenario presents a rough approximation of the reality that we in the non-profit world face. It is extremely difficult to decide how to measure progress on goals like improving wellbeing, satisfying preferences, or ensuring justice. 

Even beyond the challenge of gathering evidence or assigning probabilities to claims of effectiveness, it’s difficult to know whether we’re pursuing the "right" goal in the first place—or whether we should focus on just one goal or many. 

All of this fundamental uncertainty is a double-edged sword: it is daunting that there are so many unknowns, and yet, it is highly motivating that there are so many important questions to investigate and so much work still to be done. At RP, we are constantly refining our approach. We are not married to particular approaches or methods. Put simply, our mission is to do good (ideally large amounts), but not some specific kind of good. 

Our approach applied: Some examples of impact

How RP embraces uncertainty

We’re skeptics, open to revising our methods as we learn more. Our focus is not just on deepening our own understanding, but on helping others navigate their uncertainties. We aim to emulate GiveWell’s high levels of skepticism and transparency in investigating global health and development interventions, and to apply that gold standard across cause areas.

To this end, our Worldview Investigations Team developed free and publicly available tools that rigorously quantify the value of different courses of action, taking into account multiple decision theories. These cause prioritization tools help philanthropists to model uncertainty and better visualize how different assumptions, moral views, and decision-making procedures might affect their choices.    

Example 1: Aligning donors’ decisions with their values while accounting for moral uncertainty 

The Moral Parliament Tool models different worldviews by using delegates to represent a set of normative commitments, including first-order moral theories, values, and attitudes toward risk. It works by allowing users to:

  1. Input their confidence in various worldviews.
  2. Explore methods for reaching decisions about charitable giving. 

The tool models ways you could approach resource allocation decisions in light of normative uncertainty, showing how different philosophies and decision-making approaches affect philanthropic choices (see the image below for reference).

Our team is now looking to further develop this tool and their Portfolio Builder tool for use with a broader set of donors. 
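To give a concrete, deliberately simplified sense of what reasoning under normative uncertainty can look like computationally, here is a minimal sketch of one standard procedure from the moral uncertainty literature: maximizing expected choiceworthiness. This is not the Moral Parliament Tool's actual implementation, and the worldviews, scores, and credences below are invented purely for illustration.

```python
# A minimal, hypothetical sketch of credence-weighted aggregation across worldviews.
# NOT the Moral Parliament Tool's implementation; all names and numbers are illustrative.

# Each worldview scores how "choiceworthy" each allocation option is, on its own scale.
worldview_scores = {
    "total_utilitarianism":  {"global_health": 0.6, "animal_welfare": 0.9, "x_risk": 1.0},
    "common_sense_morality": {"global_health": 1.0, "animal_welfare": 0.5, "x_risk": 0.3},
    "suffering_focused":     {"global_health": 0.7, "animal_welfare": 1.0, "x_risk": 0.4},
}

# The user's credence (degree of belief) in each worldview; these sum to 1.
credences = {
    "total_utilitarianism": 0.5,
    "common_sense_morality": 0.3,
    "suffering_focused": 0.2,
}

def expected_choiceworthiness(option: str) -> float:
    """Weight each worldview's score for an option by the credence placed in that worldview."""
    return sum(credences[w] * scores[option] for w, scores in worldview_scores.items())

options = ["global_health", "animal_welfare", "x_risk"]
for opt in sorted(options, key=expected_choiceworthiness, reverse=True):
    print(f"{opt}: {expected_choiceworthiness(opt):.2f}")
```

Under this procedure, a worldview's influence scales with the user's credence in it; other aggregation methods the tool explores, such as voting or bargaining among parliamentary delegates, can produce different allocations from the same inputs.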

How RP works with decision-makers 

RP also provides value by turning high-quality research on important issues into practical guidance, which we work to share with key stakeholders. Our stakeholders include non-profit organizations, government agencies, policymakers, fellow research institutes, non-profit entrepreneurs, foundations, individual philanthropists, and other decision-makers. Wherever possible, we seek stakeholders’ input on our project planning, seek their feedback on research, and continue engagement with them beyond the publication of the work. 

Below we will highlight two examples: one from our work on global health and development and the other from our worldview investigations.

Example 2: Partnerships and outcomes of our global health and development work

Traditionally, foundations in the global health and development space fund projects based on preferences for a particular region or type of intervention (they could be described as local optimizers). In contrast, we believe in allocating resources based on cost-effectiveness (acting as global optimizers). GiveWell and Open Philanthropy are examples of funders who work as global optimizers. While most of our Global Health and Development Department’s work comes from projects commissioned by such funders, we also work with our networks to broaden the impact of this research and influence other actors.

Recent outcomes from our partnership-building efforts include:

  • RP’s investigations into health risks from lead exposure influenced an $8M grant toward an intervention that the team believes is a highly cost-effective way to prevent and reduce the effects of lead exposure.
  • A member of our network shared our research findings with an individual philanthropist who subsequently redirected tens of millions of dollars in funding to a more impactful field.
  • At a private event, we had the opportunity to present RP’s investigation into the value of research in influencing actors at different levels of cost-effectiveness. A major foundation that advises individual donors requested additional information and shared that the presentation sparked internal discussions about their advising and a nine-figure grantmaking fund.
Read our full impact update from the Global Health and Development Department

Example 3: Broadening moral circles

Decision-makers interested in animal welfare face difficult choices when allocating their resources. Determining how much to focus on different species entails making judgments about the overall quality of life that different species can experience (i.e., their capacity for welfare). Historically, animal welfare donors have had to rely on theoretical philosophical or scientific considerations, or even their gut instincts, with limited practical guidance on how to approach cross-species grant decisions.

To address this issue, we conducted a rigorous investigation that resulted in a model that foundations—or even governments—can directly apply when making decisions.

This Moral Weight Project culminated in a welfare range table and a series of influential research posts. Oxford University Press will also publish a forthcoming book on the research, edited by RP’s Bob Fischer. The Moral Weight Project’s findings have generated significant discussion in academic and non-profit circles. For example, Animal Charity Evaluators are now integrating elements of the work into their evaluation criteria. We are also in active conversations with stakeholders in governmental bodies in the US and the Netherlands about how they can incorporate this model into their work.
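To illustrate, in a deliberately simplified and hypothetical way, how welfare range estimates can enter a grantmaker’s cross-species comparison, here is a short sketch. The species figures, interventions, and dollar amounts are invented placeholders, not the project’s published estimates, and the single multiplier below is only one way such estimates might feed into a cost-effectiveness model.

```python
# A minimal, hypothetical sketch of using welfare range estimates to compare
# interventions across species. All numbers are placeholders, not RP's estimates.

# Welfare range: roughly, the span between the best and worst welfare states a
# species can realize, expressed relative to humans (human = 1.0).
welfare_range = {
    "human": 1.0,
    "chicken": 0.3,   # placeholder value
    "shrimp": 0.03,   # placeholder value
}

# Hypothetical interventions: species affected and animal-years of improved
# welfare per $1,000 spent (both invented for illustration).
interventions = {
    "cage_free_campaign": ("chicken", 500),
    "shrimp_stunning_commitment": ("shrimp", 20_000),
}

def human_equivalent_impact(species: str, animal_years: float) -> float:
    """Convert species-years of improved welfare into human-equivalent welfare-years."""
    return welfare_range[species] * animal_years

for name, (species, years) in interventions.items():
    impact = human_equivalent_impact(species, years)
    print(f"{name}: {impact:.1f} human-equivalent welfare-years per $1,000")
```

In practice such a multiplier is only one input among many (probability of sentience, tractability, strength of evidence, and so on), but it makes cross-species trade-offs explicit rather than leaving them to gut instinct.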

Read more about our moral weight work, our tools and surveys

How RP supports field building 

One of the ways in which RP creates impact is by helping to develop fields or subfields for causes that seem pressing, but have historically been overlooked. This work may entail: searching for niches where new research could lead to large-scale impact, conducting initial research, rallying partners, incubating or supporting new projects in a field, and building the talent pipeline by, for example, offering fellowships.

One example of this type of work is our early research on the sentience of invertebrate animals, which led us to advance the subfields of insect welfare and shrimp welfare.

Example 4: Elevating the importance of shrimp welfare within the animal advocacy field

After investigating the evidence that some invertebrates may be sentient, our Animal Welfare Department identified a critical gap in knowledge regarding their welfare. We conducted extensive research to better understand the scale of the issue and found that, at the time of the research, shrimp production affected more living individuals than insect farming, fish capture, or the farming of any other vertebrates for human consumption. The team also investigated the major welfare threats these animals face. Their findings continue to bring much-needed clarity, enabling advocates and grantmakers to prioritize welfare issues and tackle the primary sources of suffering for farmed shrimps.

RP’s research opened up new impact pathways, influencing work on policy change, corporate commitments, and strategic shifts within the animal welfare community, alongside legitimizing shrimp welfare as an important concern. See the below graphic for more about how our shrimp welfare work helped contribute to impact in policy, legislation, non-profit entrepreneurship, and industry over time.

Read more about our shrimp welfare work, as well as our efforts to advance farmed animal welfare policy across the EU here.

Read more about our impact for animals

Reflections and looking forward

Through our diverse body of work in different areas, we seek to decrease the uncertainties that people face when trying to improve the world. Moreover, we work to catalyze action on outstanding opportunities by collaborating with decision-makers to help them be more effective and even develop new subfields. In this sense, Rethink Priorities is a think-and-do tank. 

Reflecting on our work, we think that many of our self-generated project ideas, some of which we financed using RP’s unrestricted funding, have been among our most innovative and important work, resulting in some of our greatest impact over the years. Key examples of self-funded work include invertebrate sentience, moral weights and welfare ranges, the cross-cause model, and the Causes and Uncertainty: Rethinking Value in Expectation (CURVE) sequence. We have learned many lessons from this work and remain open to ways in which we can improve. Overall, we believe that RP is well positioned to keep exploring new avenues for impact as long as we have flexible support to seize opportunities.

We invite interested readers to learn more about RP via our research database and to stay updated on new work by subscribing to our newsletter. Also, please feel free to email us any of your questions or feedback! 

Acknowledgments

Rethink Priorities is a think-and-do tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. This post is authored by Kieran Greig. Thanks to Marcus A. Davis, Daniela Waldhorn, John Firth, David Moss, Janique Behman, Hannah Tookey, Whitney Childs, and Henri Thunberg for their significant contributions leading up to this text. Credit to Sherry Yang for graphic design. Special thanks go to Rachel Norman for substantial editing.
