Co-Executive Director of Rethink Priorities
Thanks for the question, but unfortunately we cannot share more about those involved or the total.
I can say we're confident this unlocked millions for something that otherwise wouldn't have happened. We think maybe half of the money moved would not have been spent, and some lesser amount would have been spent on less promising opportunities from an EA perspective.
Thanks for the question and the kind words. However, I don’t think I can answer this without falling back somewhat on some rather generic advice. We do a lot of things that I think have contributed to where we are now, but I don’t think any of them are particularly novel:
As to your idea that RP’s success might be due to high founder quality, I think Peter and I try very hard to do the best we can, but in part due to survivorship bias it’s difficult for me to say that we have any extraordinary skills others don’t possess. I’ve met many talented, intelligent, and driven people in my life, some of whom have started ventures that have been successful and others who have struggled. Ultimately, I think it’s some combination of these traits, luck, and good timing that has led us to be where we are today.
Thanks for the question! I think describing the current state will hint at a lot about what might make us change the distribution, so I’m primarily going to focus on that.
I think the current distribution of what we work on is dependent on a number of factors, including but not limited to:
In a sense, I think we’re cause neutral in that we’d be happy to work on any cause provided good opportunities arise to do so. We do have opinions on high-level cause prioritization (though I know there’s some disagreement inside RP about this topic), but given the changing marginal value of additional work in any given area in light of the above considerations, and others, we meld our work (and staff) to where we think we can have the highest impact.
In general, though this is fairly generic and high level, were we to come to think our work in a given area wasn’t useful, or that the opportunity cost were too high to continue working on it, we would decide to pursue other things. Similarly, if the reverse were true for some particular possible projects we weren’t working on, we would take them on.
Given we know so little about their potential capacities and what alters their welfare, I’d suggest the factory farming of insects is potentially quite bad. However, I don’t know what methods are effective at discouraging people from consuming them, though some of the things you suggest seem plausible paths here. I think it is pretty hard to say much on the tractability of these things without further research.
Also, we are generally keen to hear from folks who are interested in doing further work on invertebrates. And, personally, if you know of anyone interested in working on things like this I would encourage them to apply to be ED of the Insect Welfare Project.
I would like to see more applications in the areas outlined in our RFP and I’d encourage anyone with interest in working on those topics to contact us.
More generally, I would like to see far more people and funding engaged in this area. Of course, that’s really difficult to accomplish. Outside of that, I’m not sure I’d point to anything in particular.
We don’t have a cost-effectiveness estimate of our grants. The reason is that it would likely be very difficult to produce, and while it could be useful, we’re not sure it’s worth the investment for now.
On who to be in touch with, I would suggest such a prospective student contact groups like GFI and New Harvest if they would like advice on finding advisors for this type of work.
On advice, I would generally stay away from giving career advice. If forced to answer, I would not give the general advice that everyone, or even most people, are better off attempting to do high-impact research as soon as is feasible.
I think we’re looking for promising projects and one clear sign of that is often a track-record of success. The more challenging the proposal, the more something like this might be important. However, we’re definitely open to funding people without a long track record if there are other reasons to believe the project would be successful.
Personally, I’d say good university grades alone are probably not a strong enough signal, but running or participating in successful small projects on a campus might be, particularly if the projects were similar in scope or size to what was being proposed, and/or this person had good references on their capabilities from people we trusted.
The case of a nonprofit with a suboptimal track record is harder for me in the abstract. I think it depends a lot on the group’s track record and just how promising we believe the project to be. If a group has an actively bad track record, failing to produce what they’ve been paid to do or producing work of negative value, I’d think we’d be reluctant to fund them even if they were working in an area we considered promising. If the group was middling, but working in a highly promising area, I’d guess we would be more likely to fund them. However, there is obviously much grey area between these two poles and I think it really depends on the details of the proposal and track record of the group in determining whether we’d think such a project would be worth funding.
We grade all applications with the same scoring system. For the prior round, after the primary and secondary investigators had completed their reviews and we had all read their conclusions, each grant manager gave a score (excluding cases of conflicts of interest) from +5 to -5, with +5 being the strongest possible endorsement of positive impact, and -5 being an anti-endorsement of a grant considered actively harmful to a significant degree. We then averaged across scores, approving those at the very top and dismissing those at the bottom, largely discussing only those grants around the threshold of 2.5 unless anyone wanted to actively make the case for or against something outside of these bounds (the size and scope of other grants, particularly the large grants we approve, is also discussed).
That said, in my mind, grants for research are valuable to the extent they unlock future opportunities to directly improve the welfare of animals. Of course, figuring out whether, or how much, that’s feasible with any given research grant can be very difficult. For direct work, you can, at least in theory, relatively straightforwardly try to estimate the impact on animals (or at least the range of animals impacted). We try to estimate plausible success and return on animal lives improved for both, but given these facts there are some additional things I think we keep in mind. Some considerations:
There are other considerations, notably that research and direct work may have different counterfactual support options depending on the topic. There may be fewer funders interested in supporting certain types of research (say, non-academic work on neglected animals) and more interested in other, more established topics.
I don’t think it is true that the EA AW Fund is essentially neartermist, though this may depend somewhat on what you mean. We definitely consider grants that have potential long-term payoffs beyond the next few decades. In my opinion, much of the promise of PBM and cultivated meat relies on impacts that would be 15-100 years away, and neither I nor, I believe, the other funders hold any intrinsic reason to discount or not consider other areas of animal welfare that would have long-term payoffs.
That said, as you suggest in (2), I do think it is true that it makes sense for the LTFF to focus more on thinking through and funding projects premised on AGI coming to exist. A hypothetical grant proposal focused on animal welfare but dependent on AGI would probably make sense for both funds to consider, or to consult each other on, and it would depend on the details of the grant as to whose ultimate domain we believe it falls under. We received applications at least somewhat along these lines in the prior grant round, and this is what happened.
Given the above, I think it’s fair to say we would consider grants with reasoning like in your post, but sometimes such a grant may ultimately make more sense for the LTFF to consider for funding.
On the question of what I think of the moral circle expansion arguments for prioritizing animal welfare work within longtermism, I’ll speak for myself. I think you are right that the precise nature of how moral circles expand, and whether such expansion is unidimensional or multidimensional, is an important factor. In general, I don’t have super strong views on this issue, though, so take everything I say here to be stated with uncertainty.
I’m somewhat skeptical, to varying degrees, about the practical ability to test people’s attitudes about moral circle expansion in a reliable enough way to gain the kind of confidence needed to determine whether that’s a more tractable way to influence the long run, or to determine, as you suggest it might, whether to prioritize clean meat research or advocacy against speciesism, which groups of animals to prioritize, or which subgroups of the public to target if attempting outreach. The reason for much of this skepticism (as you suggest as a possible limitation of this argument) is largely the transferability across domains and cultures, and the inherent wide error bars in understanding how significantly different facts of the world would impact responses to animal welfare (and everything else).
For example, supposing it were possible to develop cost-competitive clean meat in the next 30 years, I don’t know what impact that would have on human responses to wild animal welfare or insects, and I wouldn’t place much confidence, if any, in how people say they would respond to their hypothetical future selves facing that dilemma in 30 years (to say nothing of their ability to predict the responses of generations not yet born to such facts). Of course, reasons like this don’t apply to all of the work you suggested doing and, say, surveys and experiments on the existing attitudes of those actively working in AI might tell us something about whether animals (and which animals, if any) would be handled by potential AI systems. Perhaps you could use this information to decide we need to ensure non-human-like minds are considered by those at elite AI firms.
I definitely would encourage people to send us any ideas that fall into this space, as I think it’s definitely worth considering seriously.