
This post is the executive summary for Rethink Priorities’ report on our work in 2023 and our strategy for the coming year. 

Please click here to download a 6-page PDF summary or click here to read the full report.

Executive Summary

Rethink Priorities (RP) is a research and implementation group. We research pressing opportunities to make the world better and act on them by developing and implementing strategies, projects, and solutions. We do this work in close partnership with a variety of organizations, including foundations and impact-focused nonprofits. This year’s highlights include:

  • Early traction we have had on AI governance work
  • Exploring how risk aversion influences cause prioritization 
  • Creating a cost-effectiveness tool to compare different causes
  • Foundational work on shrimp welfare
  • Consulting with GiveWell and Open Philanthropy (OP) on top global health and development opportunities

Key updates for us this year include:

All our published research can be found here.[1] Over 2023, we worked on approximately 160 research pieces or outputs. Our research directly informed grants made by other organizations at a volume at least comparable to our operating budget (i.e., over $10M).[2] Further, through our Special Projects program, we supported 11 external organizations and initiatives with $5.1M in associated expenditures. We have reason to think we may be influencing grantmakers, implementers, and other key stakeholders in ways that aren't immediately captured in either the grants-influenced figure or the Special Projects expenditure total. We have also completed work for ~20 different clients, presented at more than 15 academic institutions, and organized six of our own in-person convenings of stakeholders.

By the end of 2023, RP will have spent ~$11.4M.[3] We predict a revenue of ~$11.7M over 2023, and predict assets of ~$10.3M at year's end. We will have made 14 new hires over 2023, for a total of 72 permanent staff at year's end,[4] corresponding to ~70 full-time equivalent (FTE) staff.[5] The distribution of expenditures across our focus areas over the year was as follows: 29% of our resources were spent on animal welfare, 23% on artificial intelligence, 16% on global health and development, 11% on Worldview Investigations, 10% on our existential security work,[6] and 9% on surveys and data analysis, which encompasses various causes.[7]

Some of RP’s key strategic priorities for 2024 are: 1) continuing to strengthen our reputation and relations with key stakeholders, 2) diversifying our funding and stakeholders to scale our impact, and 3) investing greater resources into other parts of our theory of change beyond producing and disseminating research to increase others’ impact. To accomplish our strategic priorities, we aim to hire for new senior positions. 

Some of our tentative plans for next year are:

  • Creating key pieces of animal advocacy research, such as a cost-effectiveness tracking database for chicken welfare campaigns and an annual state-of-the-movement report for the farmed animal advocacy movement.
  • Addressing potentially critical windows for AI regulation by producing and disseminating research on compute governance and lab governance.
  • Consulting with more clients on global health and development interventions to attempt to shift large sums of money effectively.
  • Helping launch new projects that aim to reduce existential risk from AI. 
  • Being an excellent option for any promising projects seeking a fiscal sponsor. 
  • Providing rapid surveys and analysis to inform high priority strategic questions. 
  • Examining how foundations may best allocate resources across different causes, perhaps by creating a tool that takes users’ values across different worldviews as inputs (see the illustrative sketch after this list).
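
To give a sense of the general idea behind such a tool, here is a purely hypothetical sketch. None of the cause names, worldviews, or numbers below come from RP's actual model; the sketch only assumes that each cause can be given a cost-effectiveness score under several worldviews and that the user supplies weights reflecting their credence in each worldview.

```python
# Purely hypothetical sketch of a cross-cause comparison tool -- not RP's actual model.
# Scores are in arbitrary "units of good per $1M" and are illustrative only.
CAUSE_SCORES: dict[str, dict[str, float]] = {
    "global_health":  {"neartermist": 100.0, "animal_inclusive": 100.0, "longtermist": 10.0},
    "animal_welfare": {"neartermist":   0.0, "animal_inclusive": 300.0, "longtermist":  5.0},
    "ai_risk":        {"neartermist":   5.0, "animal_inclusive":   5.0, "longtermist": 500.0},
}

def rank_causes(worldview_weights: dict[str, float]) -> list[tuple[str, float]]:
    """Rank causes by their worldview-weighted cost-effectiveness."""
    total = sum(worldview_weights.values())
    weights = {w: v / total for w, v in worldview_weights.items()}  # normalize weights to sum to 1
    scored = {
        cause: sum(weights.get(w, 0.0) * score for w, score in scores.items())
        for cause, scores in CAUSE_SCORES.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # Example user: mostly animal-inclusive neartermist, with some longtermist credence.
    for cause, score in rank_causes({"neartermist": 0.3, "animal_inclusive": 0.5, "longtermist": 0.2}):
        print(f"{cause}: {score:.1f}")
```

In practice a real tool would also need to handle uncertainty in the scores themselves and risk attitudes, but the core step of weighting per-cause estimates by user-supplied worldview credences is the part sketched here.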

The gap between our current funding and the funding we would need to achieve our 2024 plans is several hundred thousand dollars. To further quantify the size of our funding gaps, this report outlines three scenarios over two years: 1) no growth, 2) low growth (~7.5% growth next year), and 3) moderate growth (~15% growth next year). For each scenario, we roughly estimate the total gap, including our goals for diversifying our funding, i.e., by receiving grants from funders other than OP (currently our largest funder) as well as targets for donors giving less than $100,000. The total current funding goals for non-Open Philanthropy funders under the growth scenarios through year-end 2024 range from ~$500,000 to ~$1.4M. Note that these amounts assume we maintain 12 months of reserves at the end of 2024 for our work throughout 2025.
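
As a rough back-of-the-envelope illustration of what these growth rates imply, assuming purely for illustration that growth is measured against the ~$11.4M we expect to have spent in 2023:

```python
# Rough illustration of the growth scenarios, assuming (for illustration only)
# that growth is measured against the ~$11.4M RP expects to have spent in 2023.
SPEND_2023_M = 11.4  # millions of USD

for name, rate in {"no growth": 0.0, "low growth": 0.075, "moderate growth": 0.15}.items():
    print(f"{name}: ~${SPEND_2023_M * (1 + rate):.1f}M implied 2024 budget")

# no growth: ~$11.4M, low growth: ~$12.3M, moderate growth: ~$13.1M
```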

Our cause-area funding goals (excluding Open Philanthropy) through year-end 2024 for the no-growth scenario are shown above.[8]

This report concludes with some reasons to consider funding RP.

The appendix then mainly provides background context on the organization, a somewhat fuller list of outputs by area, as well as some financial statements (balance sheet, and 2023 expenses). 

*** 

Some readers may also be interested in our upcoming webinars: 

1. Please subscribe to our newsletter if you want to hear about job openings, events, and research. Note that a little over half of our research is not publicly accessible either due to client confidentiality or due to lack of capacity to publish the work publicly.

2. This and all other dollar amounts in this review are in USD.

3. Note too that this $11.4M and the other amounts referred to in this paragraph aren’t inclusive of the amount that Special Projects of Rethink Priorities (e.g., Epoch, The Insect Institute, Apollo, etc.) spend.

4. This exact number may be slightly off due to any staff transitions late this year. Note too that we also worked, to differing extents, with close to 30 contractors throughout the year.

5. With roughly 47 FTE focused on research, 19 FTE on operations and communications, and five FTE on Special Projects focused on fiscal sponsorship and new project incubation.

6. Formerly called General Longtermism.

7. The time allocation across departments fairly closely matches the financial distributions.

8. Note that we have proportionately split our operations team's costs across all these areas.

Comments (7)



Thanks for sharing, Kieran!

Is it possible to donate specifically to a single area of RP? If yes, to what extent would the donation be fungible with donations to other areas?

Thanks for the question, Vasco :) 

Is it possible to donate specifically to a single area of RP?

Yes. Donors can restrict their donations to RP. When making the donation, just mention the restriction you would like, and we will then restrict those funds to only that use in our accounting.

If yes, to what extent would the donation be fungible with donations to other areas?

The only way this would be fungible is if it changes how we allocate unrestricted money. Based on our current plans, this would not happen for donations to our animal welfare or longtermist work but could happen for donations to other areas. If this is a concern for you, please flag it and we can increase the budget for that area by the size of your donation, fully eliminating any fungibility concerns.

We take donor preferences very seriously and do not think fungibility concerns should be a barrier to those giving to RP. That being said, we do appreciate those who trust us to allocate money to where we think it is needed most.

Thanks for clarifying!

The priorities section requires permission to access.

Should be fixed now. 

Has RP published anything laying out its current plans for funding and/or executing work on AI governance and existential risk? I was surprised to see that no RP team members are listed under existential security or AIGS anymore. Is RP's plan going forward that all work in these areas will be carried out by special projects initiatives rather than the core RP team? And how does that relate to funding? When RP receives unrestricted donor funds, are those sometimes regranted to incubated orgs within special projects? 

Thanks for your comment and questions!

RP is still involved in work on AI and existential risk. This work now takes place internally at RP on our Worldview Investigations Team and externally via our special projects program.

Across the special projects program in particular, we are supporting over 50 total staff working on various AI-related projects! RP is still very involved with these groups, from fundraising to comms to strategic support, and I personally dedicate almost all of my time to AI-related initiatives.

As part of this strategy, our team members who were formerly working in our "Existential Security Team" and our "AI Governance and Strategy" department are doing their work under a new banner that is better positioned to have the impact that RP wants to support.

We don't regrant RP's unrestricted funds to special projects, so if you want to donate to them you would have to restrict your donation to them. RP's unrestricted funds could be used to support our Worldview Investigations Team. Feel free to reach out to me or to Henri Thunberg (henri@rethinkpriorities.org) if you want to learn more.
