
Hi everyone! I’ll be doing an Ask Me Anything (AMA) here. Feel free to drop your questions in the comments below. I will aim to answer them by Monday, July 24.


Who am I? 

I’m Peter. I co-founded Rethink Priorities (RP) with Marcus A. Davis in 2018. Previously, I worked as a data scientist in industry for five years. I’m an avid forecaster. I’ve been known to tweet here and blog here.


What does Rethink Priorities do? 

RP is a research and implementation group that works with foundations and impact-focused non-profits to identify pressing opportunities to make the world better, figure out strategies for working on those problems, and do that work.

We focus on:

What should you ask me? 


I oversee RP’s work related to existential security, AI, and surveys and data analysis research, but I can answer any question about RP (or anything).

I’m also excited to answer questions about the organization’s future plans and our funding gaps (see here for more information). We're pretty funding constrained right now and could use some help!

We also recently published a personal reflection on what Marcus and I have learned in the last five years as well as a review of the organization’s impacts, future plans, and funding needs that you might be interested in or have questions about. 

RP’s publicly available research can be found in this database. If you’d like to support RP’s mission, please donate here or contact Director of Development Janique Behman.

To stay up-to-date on our work, please subscribe to our newsletter or engage with us on Twitter, Facebook, or LinkedIn.




Doing some napkin-math:

  • Rethink published 32 pieces of research in 2022 (according to your database)
  • I think roughly (?) half of your work doesn't get published as it's for specific clients, so let's say you produced 64 reports overall in 2022.
  • Rethink raised $10.7 million in 2022.
  • That works out to around $167k per research output.

That seems like a lot! Maybe I should discount a bit, as some of this might be for the new Special Projects team rather than research, but it still seems like it'll be over $100k per research output. 
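For illustration, the napkin math above can be written out explicitly (these are the commenter's own assumptions, not official RP figures; the variable names are mine):

```python
# Napkin math on RP's 2022 cost per research output (commenter's assumptions).
published = 32                 # pieces in the public research database for 2022
assumed_total = published * 2  # assume roughly half of all work goes unpublished
raised = 10_700_000            # dollars raised in 2022
cost_per_output = raised / assumed_total
print(round(cost_per_output))  # 167188, i.e. the ~$167k figure above
```

Note that, as the reply below points out, both the numerator (money raised vs. money spent) and the denominator (published vs. total outputs) are contested, so this is an upper-bound-style estimate.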

Related questions:

  • Do you think the calculations above are broadly correct? If not, could you share what the ballpark figures might actually be? Obviously, this will depend a lot on the size of the project and other factors but averages are still useful! 
  • If they are correct, how come this number is so high? Is it just due to multiple researchers spending a lot of time per report and making sure it's extremely high-quality? FWIW I think the value of some RP projects is very high - and worth more than the costs above - but I'm still surprised at the costs.
  • Is the cost something you're assessing when you decide whether to take on a research project (when it'
... (read more)

Hi James,

Thanks for your thoughtful question, but I think you’re thinking about this incorrectly for a few reasons:

Firstly, while we raised $10.7M, most of that was earmarked for 2023 as we usually raise money in the current year for the following year. In 2022, we spent around $6.8M on RP core programs, not including special projects and operations to support special projects.

Secondly, we have actually published fewer than half of our 2022 research outputs. My rough guess is that in 2022 we produced over 100 pieces of work, not the ~64 you estimate. This is for two reasons:

  • Some research is confidential for whatever reason and is never intended to be published

  • Some research is intended to be published but we haven’t had the resources or time to publish it yet because public outputs are not a priority for our clients and their funding does not cover it (this is actually something we’d love to get money from the EA public for).

To give a clearer substitute figure, we generally say that $20K-$40K pays for a typical short-term research project and $70K-$100K pays for a typical in-depth research project.

But more importantly I'd add that counting outputs per dollar is not a good way to v... (read more)

Love the question

Relatedly, how much of the funding (both for 2022 and for 2024) is for the production of research outputs, compared to how much is for other operations (like fiscal sponsorships or incubation)?

I think for marginal donations on RP, perhaps the best way to think about this would be in the cost to produce marginal research. A new researcher hire would cost ~$87K in salary (median, there is of course variation by title level here) and ~$28K in other costs (e.g., taxes, employment fees, benefits, equipment, employee travel). We then need to spend ~$31K in marginal spending on operations and ~$28K in marginal spending on management to support a new researcher. So the total cost for one new FTE year of research ends up being ~$174K. I think if you want to get a sense of how much it costs to support research at RP and how that balances between operations and other costs, this is a useful breakdown to look at.
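A quick sketch of that marginal-cost breakdown, taking the figures from the paragraph above at face value (the labels are mine):

```python
# Rough marginal cost of one new researcher FTE-year at RP, per the figures above.
costs = {
    "median researcher salary": 87_000,
    "taxes, fees, benefits, equipment, travel": 28_000,
    "marginal operations spending": 31_000,
    "marginal management spending": 28_000,
}
total = sum(costs.values())
print(total)  # 174000 -> the ~$174K per FTE-year of research quoted above
```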

In addition to research and operations, I’d say we produce roughly four other categories of things: fiscal sponsorship, incubated organizations, internal events, and external conferences. Let me go into a bit of detail about that:

  • Fiscal sponsorship arrangements pay for themselves out of the sponsored org’s budget, so they’re not something we’d seek public funding for.

  • Incubation work, or work to produce and advise new organizations based on our research (e.g., Condor Ca

... (read more)
I think this really depends on the research output. $100k for a report with roughly one person-year's worth of effort seems about right - roughly one good academic paper or master's thesis. I suspect a lot of Rethink's reports are more valuable than that. That's $100k all-in cost, including costs that aren't specific to a project: salary, overheads, taxes, travel, any expenses, training, recruitment, etc.
Nathan Young
Do you have a sense of how much funding this informed?

I’m guessing what you mean is something like “One of RP’s aims is to advise grantmaking. How many total dollars of grantmaking have you advised?” You might then be tempted to take this number, divide it by our costs, and compare that to other organizations. But this is actually a tricky question to answer, since it has never been as straightforward a relationship as I’d expect, for a few reasons:

  • Our advice is marginal and we never make a sole and final decision on any grant. Also the amount of contribution varies a lot between grants. So you need some counterfactually-adjusted marginal figure.

  • Sometimes our advice leads to grantmakers being less likely to make a grant rather than more likely… how does that count?

  • The impact value of the grants themselves is not equal.

  • Some of our research work looks into decisions but doesn’t actually change the answer. For example, we look into an area that we think isn’t promising and confirm it isn’t promising so in absolute terms we got nowhere but the hits-based fact that it could’ve gone somewhere is valuable. It’s hard to figure out how to quantify this value.

  • A large portion of our research builds on itself. For example, our in

... (read more)

Why does it make sense for Rethink Priorities to host research related to all five of the listed focus areas within one research org? It seems like they have little in common (other than, I guess, all being popular EA topics)?

Peter Wildeford
I agree this is confusing. I get into this in my answer to Sebastian Schmidt.

We spoke a little at EAG London about how people underestimate the mental health challenges people face in EA, especially among the most senior people. You indicated a willingness to talk about it publicly. If you're still up for it, could you tell us more about your own personal mental health over the past few years and your perceptions of what mental health is like amongst other effective altruists in leadership positions?

It was an AMA similar to this one where Will MacAskill revealed that he took antidepressant medication, and that actually had a large impact on me. I have historically struggled with anxiety and depression, and Will’s response contributed a large portion of the reason why I chose to ask my doctor about SSRIs in 2019. Luckily they worked, and hopefully by sharing my experience I can pay this forward.

Howie Lempel has also been very open about his experience. I think mental health concerns are common among EA “leaders” and I think they have been pretty open about it. I hope that continues and we could always use more.

I have been lucky to find antidepressants, talk therapy, regular exercise, and proactively engaging with a supportive friend group to be a great combination to alleviate the ways in which anxiety would otherwise derail my day. I encourage other people suffering from these conditions to explore these options.

Anxiety and depression will still be a lifelong struggle for me. Even with all of this, there are still a few days a year where I am so anxious and depressed that I sleep for sixteen hours and barely get out of bed. But it’s much less bad now that I’m lucky enough to have effective treatment.

John Salter
I hope this answer inspires others as Will's inspired you.

RP seems to have a somewhat unique view among research organisations in identifying a funding gap rather than a talent gap for research staff. I would be very curious why you think this is the case and how you have solved the talent constraints.

I disagree; last I checked most AI safety research orgs think they could make more good hires with more money and see themselves as funding-constrained-- at least all 4 that I'm familiar with: RP, GovAI, FAR, and AI Impacts.

Edit: also see the recent Alignment Grantmaking is Funding-Limited Right Now (note that most alignment funding on the margin goes to paying and supporting researchers, in the general sense of the word).

Peter Wildeford
I agree with Zach’s comment that other organizations are also underfunded and so this is not a unique view among RP. See also my comment to Aaron Bergman on donation opportunities. I think my comment to Sebastian Schmidt also helps answer this question and gives a bit more context about how and why RP has been less focused on talent gaps historically.

What are some questions you hope someone’s gonna ask that seem relatively unlikely to get asked organically?

Bonus: what are the answers to those questions?

Peter Wildeford
Honestly, I love this question, but I got asked a lot of real questions that I think were varied and challenging, so right now I don't feel like I need even more!

Is RP research donor-driven in terms of priorities? Do you worry that Rethink could become vastly more focused on some cause areas over others due to available funding in the space, as opposed to more neglected areas that could be more impactful?

Peter Wildeford
I do think RP, like nearly any other organization, has to “follow the money,” and given that RP has historically relied a lot on restricted assets, we do end up matching donor priorities. I think this can be good, as it gives an independent check on our prioritization and encourages us to be responsive to the needs of the broader dollar-weighted EA community. On the other hand, donor priorities are unlikely to be exactly the best thing for us to work on, and since we are funding constrained in all of our areas, I do worry that we will be steered toward particular areas more than impact assessment alone would suggest. This is one reason why we’ve been trying to make a large push to get more unrestricted funding for RP.

Aside from RP, what is your best guess for the org that is morally best to give money to?

I feel a lot of cluelessness right now about how to work out cross-cause comparisons and what decision procedures to use. Luckily we hired a Worldview Investigations Team to work a lot more on this, so hopefully we will have some answers soon.

In the meantime, I currently am pretty focused on mitigating AI risk due to what I perceive as both an urgent and large threat, even among other existential risks. And contrary to last year, I think AI risk work is actually surprisingly underfunded and could grow. So I would be keen to donate to any credible AI risk group that seems to have important work and would be able to spend more marginal money now.

As Co-CEO of RP, I am obligated to say that our AI Governance and Strategy Department is doing this work and is actively seeking funding. Our Existential Security and survey work is also very focused on AI and is also funding constrained. You can donate to RP here.

…but given that you asked me specifically for non-RP work here is my ranked list of remaining organizations:

  1. Centre for Long-Term Resilience (CLTR) does excellent work and appears to me to be exceptionally well-positioned and well-connected to meet the large
... (read more)

Have you considered doing an Animal Charity Evaluators review? I personally think Rethink puts out some of the most important animal-related research out there! 

Thanks for the compliment! We have considered it a few times but ultimately declined the opportunity to be reviewed because:

  • There are capacity limitations on our end.

  • We have concerns around how Rethink Priorities would be viewed by ACE’s audience given that we do a lot of research work in many different areas.

  • We like the opportunity to be constructively critical of ACE’s research work and like that they are also willing to challenge and push back on our research work. We are concerned this dynamic might get complicated if we are in a clear reviewer-reviewee relationship.

We do work with ACE a lot and are excited to continue to work with them. We'd definitely consider doing an ACE review in future years if invited. We also hope that fans of our work will consider supporting us financially even if we don't have an ACE top charity designation!

What is some RP research that you think was extremely important or view-changing but got relatively little attention from the EA community or relevant stakeholders?

Hi everyone! I'm sorry I didn't get to all the questions today - it was more work than I anticipated to put together. I will answer more tomorrow and I will keep going until everything has an answer!

What are some of your proudest 'impact stories' from RP's research? E.g. you did research on insects and now X funders will dedicate $Y million to insect welfare 

Are there any notable differences in your ability to have impact in the different areas you conduct research? E.g. one area where important novel insights are easier / harder, or one area where relevant research is more easily translated into practice

Yes. I think animal welfare remains incredibly understudied, and thus it is easier to have a novel insight, but there is also less literature to draw from and you can end up more fundamentally clueless. Whereas in global health and development there is much more research to draw from, which makes it easier to turn existing studies and evidence into grant recommendations via literature reviews, but it also means that a lot of the low-hanging fruit has already been picked.

Similarly, there is a lot more money available to chase top global health interventions relative to animal welfare or x-risk work, but it is also comparably harder to improve recommendations as a lot of the recommendations are already pretty well-known by foundations and policymakers.

AI has been an especially interesting place to work in because it has been rapidly mainstreaming this year. Previously, there was not much to draw on but now there is much more to draw from and many more people are open to being advised on work in the area. However, there are also many more people trying to get involved and work is being produced at a very rapid pace, which can make it harder to keep up and harder to contribute.

Re existential security, what are your AGI timelines and p(doom|AGI) like, and do you support efforts calling for a global moratorium on AGI (to allow time for alignment research to catch up / establish the possibility of alignment of superintelligent AI)?


As for existential risk, my current very tentative forecast is that the world state at the end of 2100 looks something like:

73% - the world in 2100 looks broadly like it does now (in 2023), in the same sense that the current 2023 world looks broadly like it did in 1946. That is to say, of course there will be a lot of technological and sociological change between now and then, but by the end of 2100 there still won't have been unprecedented explosive economic growth (e.g., >30% GWP growth per year), no existential disaster, etc.

9% - the world is in a singleton state controlled by an unaligned rogue AI acting on its own initiative.

6% - the future is good for humans but our AI / post-AI society causes some other moral disaster (e.g., widespread abuse of digital minds, widespread factory farming)

5% - we get aligned AI, solve the time of perils, and have a really great future

4% - the world is in a singleton state controlled by an AI-enabled dictatorship that was initiated by some human actor misusing AI intentionally

1% - all humans are extinct due to an unaligned rogue AI acting on its own initiative

2% - all humans are extinct due to something else on this list (e.g., some ot... (read more)
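As a sanity check (taking the percentages above at face value, including the truncated final item), the listed scenarios do form a complete probability distribution:

```python
# Check that the forecast scenario probabilities above sum to 100%.
probabilities = [73, 9, 6, 5, 4, 1, 2]  # in the order listed above
print(sum(probabilities))  # 100
```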

This is interesting and something I haven't seen expressed much within EA. What is happening in the 8% where the humans are still around and the unaligned singleton rogue AI is acting on its own initiative? Does it just take decades to wipe all the humans out? Are there digital uploads of (some) humans for the purposes of information saving?[1] Does the AI hit a ceiling on intelligence/capability that means humans retain some economic niches? Is the misalignment only partial, so that the AI somehow shares some of humanity's values (enough to keep us around)? Does this mean that you think we get alignment by default? Or that alignment is on track to be solved on this timeline? Or that somehow we survive misaligned AI (as per the above discrepancy between your estimates for singleton unaligned rogue AI and human extinction)? As per my previous comment, I think the default outcome of AGI is doom with high likelihood (and I haven't received any satisfactory answers to the question "If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?").

  1. ^ This still seems like pretty much an existential catastrophe in my book, even if it isn't technically extinction.
Vasco Grilo
Thanks for elaborating, Peter! Do you mind sharing how you obtained those probabilities? Are they your subjective guesses?

Re existential security, what are your AGI timelines

I have trouble understanding what “AGI” specifically refers to, and I don’t think it’s the best way to think about risks from AI. As you may know, in addition to being co-CEO at Rethink Priorities, I take forecasting seriously as a hobby, and people actually for some reason pay me to forecast, making me a professional forecaster. So I think a lot in terms of concrete resolution criteria for forecasting questions, and my thinking on these questions has been meaningfully bottlenecked by not knowing what those concrete resolution criteria are.

That being said, being a good thinker also involves having to figure out how to operate in some sort of undefined grey space, and so I should be at least somewhat comfortable enough with compute trends, algorithmic progress, etc. to be able to give some sort of answer. And so I think for the type of AI that I struggle to define but am worried about – the kind that has the capability of autonomously causing existential risk – the kind of AI that AI researcher Caroline Jeanmaire refers to as the “minimal menace” – I am willing to tentatively put the following distribution on t... (read more)

Thanks for your detailed answers Peter. Caroline Jeanmaire's "minimal menace" is a good definition of AGI for our purposes (as are Holden Karnofsky's PASTA, OpenPhil's Transformative AI, and Matthew Barnett's TAI). I'm curious about your 5% by 2035 figure. Has this changed much as a result of GPT-4? And what is happening in the remaining 95%? How much of that is extra "secret sauce" remaining undiscovered? A big reason for me updating so heavily toward AGI being near (and correspondingly, doom being high given the woeful state-of-the-art in Alignment) is the realisation that there very well may be no additional secret sauce necessary, and all that is needed is more compute and data (read: money) being thrown at it (and 2 OOMs increase in training FLOP over GPT-4 is possible within 6-12 months). How likely do you consider this to be, conditional on business as usual?

I think things are moving in the right direction, but we can't afford to be complacent. Indeed we should be pushing maximally for it to happen (to the point where, to me, almost anything else looks like "rearranging deckchairs on the Titanic"). Whilst I may not be a professional forecaster, I am a successful investor, and I think I have a reasonable track record of being early to a number of significant global trends: veganism (2005), cryptocurrency (several big wins from investing early - BTC, ETH, DOT, SOL, KAS; maybe a similar amount of misses, but overall up ~1000x), Covid (late Jan 2020), AI x-risk (2009), AGI moratorium (2023, a few days before the FLI letter went public).
Peter Wildeford
I'm definitely interested in seeing these ideas explored, but I want to be careful before getting super into it. My guess is that a global moratorium would not be politically feasible. But pushing for a global moratorium could still be worthwhile even if it is unlikely to happen, as it could be a good galvanizing ask that brings more general attention to AI safety issues and makes other policy asks seem more reasonable by comparison. I'd like to see more thinking about this. On the merits of the actual policy, I am unsure whether a moratorium is a good idea. My concern is that it may just produce a larger compute overhang, which could increase the likelihood of future discontinuous and hard-to-control AI progress. Some people in our community have been convinced that an immediate and lengthy AI moratorium is a necessary condition for human survival, but I don't currently share that assessment.

Good to see that you think the ideas should be explored. I think a global moratorium is becoming more feasible, given the UN Security Council meeting on AI, The UK Summit, the Statement on AI risk, public campaigns etc.

Re compute overhang, I don't think this is a defeater. We need the moratorium to be indefinite, and only lifted when there is a global consensus on an alignment solution (and perhaps even a global referendum on pressing go on more powerful foundation models).

Some people in our community have been convinced that an immediate and lengthy AI moratorium is a necessary condition for human survival, but I don't currently share that assessment.

This makes sense given your timelines and p(doom) outlined above. But I urge you (and others reading) to reconsider the level of danger we are now in[1].

  1. ^

    Or, ahem, to rethink your priorities (sorry).

What are your thoughts, for you personally, around...
I) Time spent 
II) Joy of use 
III) Value of information gained
of Manifold vs Metaculus?

I use both Manifold and Metaculus every day and it’s not really clear to me which I spend time on more. The answer is “a lot” to both.

For joy of use, I think Manifold has worked hard to make the forecasting process very seamless and I like that. I also like the gamification of the mana profit system. That being said, I think the questions on Metaculus tend to be more interesting. I personally like having rigorous resolution criteria and I personally prefer being able to give my true probabilities rather than bet up or down. So Metaculus might suit my personality better.

Surprisingly I don’t really have a clear read which platform is more accurate. So I think the value of information is optimized by using both platforms. I’m keen to see this researched more.

In answering this question I should disclose that Metaculus pays me money for being a forecaster. I suppose Manifold also indirectly pays me money because RP is part of their Manifold for Charity program. So my feelings towards them are not exactly unbiased.

You said in your "Five years" post that you are planning to do more self-eval and impact assessments, and I strongly encourage this. What are the most realistic bits of evidence you could get from an impact report of Rethink Priorities which would cause you to dramatically update your strategy? (or, another generator: what are you most worried about learning from such assessments?)

What do you think the ideal ratio in terms of resource allocation between thinking/research and doing/action in EA would be? (I recognize those categories are ill-defined, and some activities won't comfortably fall into either bucket. But they seem discrete enough to make a question about balancing different kinds of work worthwhile.)

Peter Wildeford
I think it varies a lot by cause area but I think you would be unsurprised to hear me recommend more marginal thinking/research. I think we’re still pretty far from understanding how to best allocate a doing/action portfolio and there’d still be sizable returns from thinking more.

Rethink feels unique among EA orgs - it's large, not attached to a university, not a foundation. Why aren't there more standalone research shops? Should there be?

RP’s arrangement here is definitely not unique within EA, though I do agree we may be the largest EA-affiliated non-university, non-foundation research organization, as my guess is we are a little larger than GiveWell by FTE headcount. Though adding all those caveats ends up with me not saying very much, kinda like talking about being the largest private Catholic university in Vermont.

I think university affiliations definitely matter, especially for getting your work in front of policymakers. My guess is that research organizations choose to affiliate with a university when they can for this reason, and it’s a good one.

But I also like not having to worry about the bureaucracy that comes with interfacing with a university and I think this has historically allowed RP to be more agile and grow faster. I think it’s important that EA have both university and non-university research organizations.

(Obviously everyone would love to be attached to a multi-billion dollar foundation and if we can get more of those we obviously should, but I assume that’s not really an option.)

How has your experience as co-CEO been? How do you share responsibilities? Would you recommend it to other orgs?

I’ve personally liked it. There have been several times when I’ve talked with my co-CEO Marcus about whether one of us should just become CEO and it’s never really made sense. We work well together and the co-CEO dynamic creates a great balance between our pros and cons as leaders – Marcus leads the organization to be more deliberate and careful at the cost of potentially going too slowly and I lead the organization to be more visionary at the cost of potentially being too chaotic.

Right now we split the organization’s portfolio well: Marcus handles Global Health and Development, Animal Welfare, and Worldview Investigations… and I handle AI Governance and Strategy, Existential Security (AI-focused incubation), and Surveys and Data Analysis (currently mostly AI policy focused, though you may know us mainly from the EA Survey).

I’m unsure if I’d recommend it to other orgs. I think most times it wouldn’t make sense. But I think it does make sense when there are two co-founders with an equally natural claim and desire to claim the CEO mantle, when they balance each other well, and when there is some sort of clear split and division of responsibility.

Hi Peter, thanks for your work. I have several questions:

  1. Most organizations within EA are relatively small (<20). Why do you think that's the case and why is RP different?
  2. How do you decide on which research areas to focus on and, relatedly, how do you decide how to allocate money to them?
  3. What do you focus on within civilizational resilience?
  4. How do you decide whether something belongs to the longtermism department (i.e., whether it'll affect the long-term future)?

How do you decide on which research areas to focus on and, relatedly, how do you decide how to allocate money to them?

We do broadly aim to maximize the cost-effectiveness of our research work and so we focus on allocating money to opportunities that we think are most cost-effective on the margin.

Given that, it may be surprising that we work in multiple cause areas, but we face some interesting constraints and considerations:

  • There is significant uncertainty about which priority area is most impactful. The general approach at RP has been that we can scale up multiple high-quality research teams in a variety of cause areas more easily than we can figure out which cause area we ought to prioritize. That said, we recently hired a Worldview Investigations Team to work a lot more on the broader question of how to allocate an EA portfolio, and we are also investing a lot more in our own impact assessment. Together, we hope these will give us more insight into how to allocate our work going forward.

  • There may be diminishing returns to RP focusing on any one priority area.

  • A large amount of resources are not fungible across these different areas. The marginal opportunity cost to taking res

... (read more)
Thanks for this. I notice that all of these reasons are points in favor of working on multiple causes and seem to neglect considerations that would go in the other direction. And clearly, you take those considerations seriously too (e.g., scale and urgency), as you recently decided to focus exclusively on AI within the longtermism team.

Most organizations within EA are relatively small (<20). Why do you think that's the case and why is RP different?

I’m not exactly sure and I think you’d have to ask some other smaller organizations. My best guess is that scaling organizations is genuinely hard and risky, and I can understand other organizations may feel that they work best and are more comfortable with being small. I think RP has been different by:

  • Working in multiple different cause areas lets us tap into multiple different funding sources, increasing the amount of money we could raise. It also increased the amount of work we wanted to do and the number of people we wanted to hire.

  • By being 100% remote-first from the beginning, we had a much larger talent pool to tap into. I think we’ve also been more willing to take chances on more junior-level researchers which has also broadened our talent pool. This allowed us to hire more.

  • A general willingness and aspiration to be a big research organization and take on that risk, rather than intentionally going slow.

This makes sense. Do you have any explicit intentions for how big you want to get?
Peter Wildeford
We haven’t had to make too many fine-grained decisions, so it hasn’t been something that has come up enough to merit a clear decision procedure. I think the trickiest decision was what to do with research aimed at understanding and mitigating the negative effects of climate change. The main considerations were questions like “how do our stakeholders classify this work” and “what is the probability of this issue leading to human extinction within the century” and both of those considerations led to climate change work falling into our “global health and development” portfolio. This year we’ve made an intentional decision to focus nearly all our longtermist work on AI due to our assessment of AI risk as both unusually large and urgent, even among other existential risks. We will revisit this decision in future years and to be clear this does not mean that we think other people shouldn’t work on non-AI x-risk or non-xrisk longtermism.
Peter Wildeford
This year we’ve made an intentional decision to focus nearly all our longtermist work on AI, due to our assessment of AI risk as both unusually large and urgent, even among other existential risks. We will revisit this decision in future years, and to be clear, it does not mean we think other people shouldn’t work on non-AI x-risk, or on longtermist work not oriented toward existential risk reduction. But it does mean we don’t have any current work on civilizational resilience. That said, we have done some work in this area in the past:

  • Linch did a decent amount of research and coordination work around exploring civilizational refuges, but RP is no longer working on this project.
  • Jam has previously done work on far-UVC, for example by contributing to "Air Safety to Combat Global Catastrophic Biorisks".
  • We co-supported Luisa in writing "What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?" while she was a researcher at both Rethink Priorities and the Forethought Foundation.

What's something about you that might surprise people who only know your public, "professional EA" persona?

Peter Wildeford
  • I like pop music, like Ariana Grande and Olivia Rodrigo, though Taylor Swift is the Greatest of All Time. I went to the Eras Tour and loved it.
  • I have strong opinions about the multiple types of pizza.
  • I'm nowhere near as good at coming up with takes and opinions off-the-cuff in verbal conversations as I am in writing. I'm 10x smarter when I have access to the internet.

There is a sense that the journal system is obviously flawed and could be trivially improved. Why hasn't EA done this?

We publish lots of material, we have lots of resources. It seems possible to imagine building a few journals that run in a different way.

And even if others don't respect them at first, if EA orgs did, and they were less onerous to publish in, I imagine outsiders would start to respect them too.

I haven’t actually thought much about the academic journal system, though I’m interested in what David Reinstein (former RP staff member) has been doing with his Unjournal.

Peter is one of the best people I know well. He is kind, empathetic, wise, hard-working, and well-calibrated, to name a few qualities. Generally, when I want to be more like someone, it's along one axis, whereas I wish I were more like Peter along many. I know that his character has been developed with work and over time, so I'd like to commend him for this, and thank him for his hard work and the outputs of it.

I guess that to the reader I'd say that Peter is good in ways you can see, but also as good in many ways that you can't - he gives good advice, he provides insight o... (read more)

I think this is a particularly good piece by Peter, though I am crying reading it. https://www.pasteurscube.com/for-samantha-a-eulogy/ 

Peter Wildeford
This is so very sweet - thank you!
Peter Wildeford
I like both! I developed squigglepy because I liked squiggle so much but RP really wanted to make use of the Python ecosystem. So now we get the best of both worlds! I am a strong supporter of QURI, especially because Rethink Priorities now provides them with fiscal sponsorship.
[This is 75% a joke, as Peter developed squigglepy based on QURI's squiggle]
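For readers unfamiliar with these tools: squiggle and squigglepy are languages/libraries for estimation under uncertainty. The core idea, Monte Carlo simulation over uncertain inputs, can be sketched in plain Python (this is not squigglepy's actual API; the distributions and numbers below are made up for illustration):

```python
import math
import random
import statistics

def monte_carlo_estimate(n=10_000, seed=0):
    """Toy cost-effectiveness estimate: multiply uncertain inputs sampled
    from simple distributions and summarize the resulting spread."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        reach = rng.lognormvariate(math.log(1_000), 0.5)  # people reached (median ~1,000)
        effect = rng.uniform(0.01, 0.05)                  # benefit per person (hypothetical)
        samples.append(reach * effect)
    samples.sort()
    return {
        "p5": samples[int(0.05 * n)],
        "median": statistics.median(samples),
        "p95": samples[int(0.95 * n)],
    }
```

Libraries like squigglepy wrap this pattern in a concise distribution syntax, while staying inside the Python ecosystem that the plain sketch above uses directly.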

(1) where do you think forecasting has its best use-cases? where do you think forecasting doesn't help, or could hurt?

interested in your answer both as the co-CEO of an organization making important decisions, and an avid forecaster yourself.


(2) what are RP's plans with the Special Projects Program?

Peter Wildeford
The plan for RP Special Projects is to continue to fiscally sponsor our existing portfolio of organizations, see how that goes, and continue to build capacity to support additional organizations in the future. Current marginal Special Projects time is going into exploring more incubation work with our Existential Security department.
Peter Wildeford
I'm actually surprisingly unsure about this, especially given how interested I am in forecasting. When it comes to actual institutional decision making, it is pretty rare for forecasts to be used in very decision-relevant ways, and a lot of the challenge comes from asking the right questions in advance rather than from the actual skill of creating a good forecast. A lot of the solutions proposed can be expensive and overengineered, focusing far too much on forecasting and not enough on the initial question writing. Michael Story gets into this well in "Why I generally don't recommend internal prediction markets or forecasting tournaments to organisations".

I think something like "Can Policymakers Trust Forecasters?" from the Institute for Progress takes a healthier view of how to use forecasting. Basically, you need some humility about what forecasting can accomplish, but explicit quantification of your views is a good thing, and it is also really good for society to grade experts on their accuracy rather than on their ability to manipulate the media system.

Additionally, I do think that knowing about the world ahead of time seems generally valuable, and forecasting still seems like one of the best ways to do that. For example, essentially everything we know about existential risk comes down to various kinds of forecasting. Lastly, my guess is that a lot of the potential of forecasting for institutional decision making is still untapped and merits further meta-research and exploration.
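The "grade experts on their accuracy" idea above is usually operationalized with a proper scoring rule. Here is a minimal sketch of the Brier score in Python (illustrative only, not any particular platform's implementation; the forecast numbers are made up):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes.
    0 is a perfect score; always saying 50% scores 0.25."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must align")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster beats a pure hedger:
sharp = brier_score([0.9, 0.1, 0.8], [1, 0, 1])   # ≈ 0.02
hedged = brier_score([0.5, 0.5, 0.5], [1, 0, 1])  # = 0.25
```

Lower is better, so tracking this over many resolved questions rewards calibrated confidence rather than media-friendly boldness.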

Do you think that promoting alternative proteins is (by far) the most tractable way to make conventional animal agriculture obsolete?

Do you think increasing public funding and support for alternative proteins is the most pressing challenge facing the industry?

Do you think there is expert consensus on these questions?

Peter Wildeford
Evidence for alternative proteins being the most tractable way to make conventional animal agriculture obsolete is fairly weak. For example, similar products (e.g., plant-based milk, margarine) have not made their respective categories obsolete. Instead, we have, and will continue to need, a multi-pronged approach to transitioning conventional animal agriculture to a more just and humane system.

Alternative proteins are a varied landscape, so I imagine the bottlenecks will be pretty different depending on the particular product, company, and approach. Unfortunately, I am not up to date on the details of the funding gaps in this area.

Unfortunately, there is not. There also just aren't that many experts in this area in the first place.
Thanks so much for your insight! I learned a lot, although I wish I had been clearer and asked about the tractability of alternative proteins reaching price parity (instead of just the tractability of "promoting" them). Because:

  • Plant-based milks are still more expensive (source) and maybe not as nutritious (e.g., less calcium, B12, etc.), so I think they (and many other existing products) may not be a reliable indicator of the potential of this field to make "conventional animal agriculture obsolete".
  • I think their potential to replace factory farming lies in the viability of them becoming as cheap, tasty, and nutritious as conventional animal products, but I'd love to know if you (and other experts) think that's probably a pipe dream.

I'd love to be corrected if I'm wrong (although I'm sure you're very busy), and also wanted to say thanks again.

Dear Mr. Wildeford, 

To what extent does your work depend on your own staff vs. the academic EA infrastructure?

There are organizations such as "Effective Thesis" that try to redirect academic resources toward EA research. Do you have any relationship with those organizations? Is there any way for external collaborators to work with your organization? Could you elaborate on your vision of how "in-house" and "external" research should be optimally combined at Rethink Priorities?

Thank you very much for your excellent work.

Kind regards,


Peter Wildeford
Rethink Priorities does collaborate with academic institutions, mainly by hiring academics as contractors to do novel research and/or to review our work. My sense is that the academics we work with are not unusually likely to be EA-affiliated, though, and I wouldn't say we use any EA-specific academic infrastructure.