We mean to say that the ideas for these projects and the vast majority of the funding were ours, including the moral weight work. To be clear, these projects were the result of our own initiative. They wouldn't have gone ahead when they did without us insisting on their value.
For example, after our initial work on invertebrate sentience and moral weight in 2018-2020, in 2021 OP provided $315K of funding to support this work. In 2023 they also provided $15K for open-access rights to a forthcoming book on the topic. In that period of 2021-2023, for public-fa...
Hey Vasco, thanks for the thoughtful reply.
I do find fanaticism problematic at a theoretical level since it suggests spending all your time and resources on quixotic quests. I would go one further and say that if a set of axioms implies something like fanaticism, this should at least potentially count against that combination of axioms. That said, I definitely think, as Hayden Wilkinson pointed out in his In Defence of Fanaticism paper, there are many weaknesses with the alternatives to EV.
Also, the idea that fanaticism doesn’t come up i...
Thanks for the engagement, Michael.
I largely agree with your notes and caveats.
However, on this:
...Expected utility maximization can be guaranteed to avoid fanaticism while satisfying the standard EUT axioms (and countable extensions), with a bounded utility function and the bounds small enough or marginal returns decreasing fast enough, in relative terms… In my view, expected utility with a bounded utility function (not difference-making) is the most instrumentally rational of the options, and it and boundedness with respect to differences seem the most pr
In trying to convince people to support global health charities I don't think I've ever gotten the objection "but people in other countries don't matter" or "they matter far less than Americans", while I expect vegan advocates often hear that about animals.
I have gotten the latter one explicitly and the former implicitly, so I'm afraid you should get out more often :).
More generally, the idea that foreigners and/or immigrants don't matter, or matter little compared to native-born locals, is fundamental to political parties around the world. It's a banal take in ...
David's post is here: Perceived Moral Value of Animals and Cortical Neuron Count
What do you think of this rephrasing of your original argument:
I suspect people rarely get deeply interested in the value of foreign aid unless they come in with an unusually high initial intuitive view that being human is what matters, not being in my country... If you somehow could convince a research group, not selected for caring about non-Americans, to pursue this question in isolation, I'd predict they'd end up with far less foreign aid-friendly results.
I think this arg...
Maybe. We're a little unsure about this right now. The code base for this is part of the bigger Cross-Cause Cost-Effectiveness Model, and we haven't made a final determination on whether we will release it.
Jeff, are you saying you think "an intuition that a human year was worth about 100-1000 times more than a chicken year" is a starting point of "unusually pro-animal views"?
In some sense, this seems true relative to the views implied by most humans' actions. But, as Wayne pointed out above, this same critique could apply to, say, the typical American's views about global health and development. Generally, it doesn't seem to buy much to frame things relative to people who've never thought about a given topic substantively, and I don't think you'd think this...
Thanks for the question, but unfortunately we cannot share more about those involved or the total.
I can say we're confident this unlocked millions for something that otherwise wouldn't have happened. We think maybe half of the money moved would not have been spent, and some lesser amount would have been spent on less promising opportunities from an EA perspective.
Thanks for the question and the kind words. However, I don’t think I can answer this without falling back somewhat on some rather generic advice. We do a lot of things that I think have contributed to where we are now, but I don’t think any of them are particularly novel:
Thanks for the question! I think describing the current state will hint at a lot of what might make us change the distribution, so I’m primarily going to focus on that.
I think the current distribution of what we work on is dependent on a number of factors, including but not limited to:
In a sense, I think we’re cause neutral in that we’d be happy to work on any cause provi...
Given we know so little about their potential capacities and what alters their welfare, I’d suggest the factory farming of insects is potentially quite bad. However, I don’t know what methods are effective at discouraging people from consuming them, though some of the things you suggest seem like plausible paths here. I think it is pretty hard to say much on the tractability of these things without further research.
Also, we are generally keen to hear from folks who are interested in doing further work on invertebrates. And, personally, if you know of...
I would like to see more applications in the areas outlined in our RFP and I’d encourage anyone with interest in working on those topics to contact us.
More generally, I would like to see far more people and funding engaged in this area. Of course, that’s really difficult to accomplish. Outside of that, I’m not sure I’d point to anything in particular.
We don’t have a cost-effectiveness estimate of our grants. The main reason is that it would likely be very difficult to produce, and while it could be useful, we're not sure it's worth the investment for now.
On who to be in touch with, I would suggest such a prospective student contact groups like GFI and New Harvest if they would like advice on finding advisors for this type of work.
On advice, I would generally stay away from giving career advice. If forced to answer, I would not say, as general advice, that everyone or most people are better off attempting to do the highest-impact research as soon as is feasible.
I think we’re looking for promising projects and one clear sign of that is often a track-record of success. The more challenging the proposal, the more something like this might be important. However, we’re definitely open to funding people without a long track record if there are other reasons to believe the project would be successful.
Personally, I’d say good university grades alone are probably not a strong enough signal, but running or participating in successful small projects on a campus might be, particularly if the projects were similar in scope or s...
We grade all applications with the same scoring system. For the prior round, after the primary and secondary investigators' reviews, and once we’d all read their conclusions, each grant manager gave a score (excluding cases of conflict of interest) from +5 to -5, with +5 being the strongest possible endorsement of positive impact, and -5 being an anti-endorsement of a grant seen as actively harmful to a significant degree. We then averaged across scores, approving those at the very top and dismissing those at the bottom, largely discussing only those grant...
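As a rough illustration of this aggregation step, here is a minimal sketch; the names, example data, and structure are invented for illustration, not RP's actual tooling:

```python
from statistics import mean

def rank_grants(scores_by_grant, conflicts):
    """Average each grant's manager scores (-5 to +5), skipping any
    manager with a conflict of interest on that grant, then sort
    from strongest endorsement to strongest anti-endorsement."""
    averages = {}
    for grant, scores in scores_by_grant.items():
        usable = [s for manager, s in scores.items()
                  if manager not in conflicts.get(grant, set())]
        averages[grant] = mean(usable)
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

# Invented example: three applications scored by two grant managers.
scores_by_grant = {
    "A": {"manager1": 4, "manager2": 3},
    "B": {"manager1": -2, "manager2": 1},
    "C": {"manager1": 5, "manager2": 2},
}
conflicts = {"C": {"manager2"}}  # manager2 recuses on grant C
ranking = rank_grants(scores_by_grant, conflicts)
# C averages 5 (only manager1 counts), A averages 3.5, B averages -0.5
```

In practice the top and bottom of such a ranking are decided quickly, with discussion reserved for the middle, as described above.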
I don’t think it is true that the EA AW Fund is essentially neartermist, though this may depend somewhat on what you mean. We definitely consider grants with potential long-term payoffs beyond the next few decades. In my opinion, much of the promise of PBM and cultivated meat relies on impacts that would be 15-100 years away, and neither I nor, I believe, the other funders hold any intrinsic reason to discount, or to not consider, other areas of animal welfare that would have long-term payoffs.
That said, as you suggest in (2), I do think it is true that it make...
In the just-completed round we got several applications from academics looking for support for research on plant-based and cultivated meat projects, though we ultimately decided not to support any of them. We definitely welcome grant applications in this area, and our new request for proposals explicitly calls for applications on work in this space. Additionally, I would direct them to consider applying to GFI’s alternative protein research grants and the Food Systems Research Fund, among other places, if they believe they have promising projects in this sp...
What new charities do you want to be created by EAs?
I don't have any strong opinions about this and it would likely take months of work to develop them. In general, I don't know enough to say whether it is desirable that new charities work in areas I think could use more work, rather than existing organizations scaling up their work in those domains.
What are the biggest mistakes Rethink Priorities did?
Not doing enough early enough to figure out how to achieve impact from our work and communicate with other organizations and funders about how we can work together.
Thanks for the questions!
If one is only concerned with preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
I think this depends on many factual beliefs you hold, including which groups of creatures count and what time period you are concerned about. Restricting ourselves to the present and assuming all plausibly sentient minds count (and ignoring extremes, say, less than 0.1% chance), I think farm and wild animals are plausible candidates for enduring...
Thanks for the question, Edo!
We keep a large list of project ideas, and regularly add to it by asking others for project ideas, including staff, funders, advisors, and organizations in the spaces we work in.
Hey Edo, thanks for the question!
We've had some experience working with volunteers. In the past, when we had less operational support than we do now, we found it challenging to manage and monitor volunteers, but we think it's something we're better placed to handle now, so we may explore it again in the coming years, though we are generally hesitant about depending on free labor.
We've not really had experience publicly outsourcing questions to the EA community, but we regularly consult wider EA communities for input on questions we are working on. Finally, and I'm not sure this is what you meant, but we've also partnered with Metaculus on some forecasting questions.
Hey Josh, thanks for the question!
From first principles, our allocation depends on talent fit, the counterfactual value of our work, fundraising, and, of course, some assessment of how important we think the work is, all things considered.
At the operational level, we set targets as percentages of time we want to spend on each cause area based on these factors, and we re-evaluate those targets as our existing commitments, the data, and changes in our opinions about these matters warrant.
I think it's going great! I think our combined skillset is a big pro when reviewing work and considering project ideas. In general, I think bouncing ideas off each other improves and sharpens our ideas. We are definitely able to cover more depth and breadth with the two of us than if only one person were leading the organization.
Additionally, Peter and I get along great and I enjoy working alongside him every day (well, digitally anyway, given we are remote).
Thanks for the question!
We hire for fairly specific roles, and the difference between those we do and don't hire isn't necessarily as simple as those brought on being better researchers overall (to say nothing of differences in fit or skill across causes).
That said, we generally prioritize ability in writing, general reasoning, and quantitative skills. That is, we value the ability to uncover and address considerations, counter-points, and meta-considerations on a topic, and to produce quantitative models and do data analysis when appropriate (obviously this is mor...
Thanks for the questions!
On (1), we see our work in WAW as currently doing three things: (1) foundational research (e.g., understanding moral value and sentience, understanding well-being at various stages of life), (2) investigating plausible tractable interventions (i.e., feasible interventions currently happening or doable within 5 years), and (3) field building and understanding (e.g., currently we are running polls to see how "weird" the public finds WAW interventions).
We generally defer to WAI on matters of direct outreach (both academic and general ...
Thanks for the question!
I think the short answer is this: what we think of doing projects in the improving-collective-understanding space depends on a number of factors, including the nature of the project, the probability of that general change in perspective leading to changed actions in the future, and how important it would be if that change occurred.
One very simplistic model you can use to think about possible research projects in this area is:
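The model itself is cut off above, but given the factors just listed, a plausible shape for it is a simple product of terms. This is my own sketch, not necessarily the model the original comment described:

```python
def project_value(p_changes_views, p_views_change_actions, value_of_change):
    """Very simplistic expected value of an 'improving collective
    understanding' project: the chance the research changes views, times
    the chance those changed views lead to changed actions, times the
    value of those changed actions."""
    return p_changes_views * p_views_change_actions * value_of_change

# Purely illustrative numbers.
ev = project_value(0.5, 0.25, 80.0)  # -> 10.0
```

Even a toy model like this makes clear why each factor matters: a project that would be very valuable if it changed minds can still have low expected value if it is unlikely to change any actions.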
Hey, I'm happy to see this on the forum! I think farmed shrimp interventions are a promising area and this report highlights some important considerations. I should note that Rethink Priorities has also been researching this topic for a while. I won't go into detail, as I'm not leading this work and the person who is currently is on leave, but I think we've tentatively come to some different conclusions about the most promising next steps in this domain.
In the future, if anyone reading this is inclined to work on farmed shrimp, in addition to reviewing this report I'd hope you'd read over our forthcoming work and/or reach out to us about this area.
I think 1 and 2 should result in the exact same experiences (and hence the same intensity), since the difference is just some neurons that didn't do anything or interact with the rest of the brain, even though 2 has a greater proportion of neurons firing. The claim that their presence/absence makes a difference seems unphysical to me, because they didn't do anything in 1, where they were present.
I'm unclear why you think proportion couldn't matter in this scenario.
I've written a pseudo program in Python below in which proportion does matter, removing neuron...
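The original program is truncated above; the following is my own minimal reconstruction of the kind of comparison being described, with invented names and numbers:

```python
def intensity_absolute(firing, total):
    """Intensity depends only on how many neurons fire;
    inactive neurons are irrelevant."""
    return firing

def intensity_proportional(firing, total):
    """Intensity depends on the fraction of neurons firing,
    so silent neurons still change the result."""
    return firing / total

# Scenario 1: 100 neurons fire out of 1000 (900 silent).
# Scenario 2: the same 100 fire, but the 900 silent ones are removed.
assert intensity_absolute(100, 1000) == intensity_absolute(100, 100)
assert intensity_proportional(100, 1000) != intensity_proportional(100, 100)
```

On the absolute view, the two scenarios are identical, matching the intuition in the parent comment; on the proportional view, removing the silent neurons changes intensity from 0.1 to 1.0, which is the kind of dependence on inactive neurons being disputed.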
We in fact do (1) then (2). However, to continue your example, donations to animal work still end up going to animals. If it were the case, say, that we hit the animal total needed for 2020 before the overall total, additional animal donations would go to animal work for 2021.*
It is true in this scenario that in 2020 we'd end up spending less unrestricted funding on animals, but the total spent on animals that year wouldn't change and the animal donations for 2020 would not then be spent on non-animal work.
*We would very much state publicly when we have no more room for further donations in general, and by cause area.
Internally, as part of Rethink Charity, we have fairly standard formal anti-harassment, discrimination, and reasonable accommodation policies. That is, we comply with all relevant anti-discrimination laws, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA). We explicitly prohibit offensive behavior (e.g., derogatory comments towards colleagues of a specific gender or ethnicity).
We also provide a way for any of our staff to offer anonymous feedback and information to se
...Thanks for the question. We have forthcoming work on ballot initiatives which will hopefully be published in January and other work that we plan to keep unpublished (though accessible to allies) for the foreseeable future.
In addition, we have some plans to investigate potentially high value policies for animal welfare.
On CE's work, we communicate with them fairly regularly about their work and their plans, in addition to reading and considering the outputs of their work.
I honestly don’t know. I’d probably be doing research at another EA charity, or potentially leading (or trying to lead) a slightly different EA charity that doesn’t currently exist. I have previously seriously considered working at other EA organizations, but it's been some time since I last thought hard about this.
Thanks for the question and thanks for the compliment about our work! As to the impact of the work, from our Impact survey:
Invertebrate sentience was the second most common (13) piece of work that changed beliefs. It also prompted the second-largest number of changed actions of all our work (alongside the EA Survey), including 1 donation influenced, 1 research inspiration, and 4 unspecified actions.
Informally, I could add many people (probably >10) in the animal welfare space have personally told me they think our work on invertebrates changed their opinion about invert
...We have raised half his salary for 2020 and 2021 on a grant explicitly for this purpose. If you’d like to talk more about this, I’d be happy for you to shoot me an email: marcus [at] rtcharity.org
Thanks for the question! We do research informed by input from funders, organizations, and researchers that we think will help funders make better grants and help direct work organizations do to higher impact work.
Our plans for distribution vary by the audience in question. For funders and particular researchers, we make direct efforts to share our work with them. Additionally, we try to regularly have discussions about our work and priorities with the relevant existing EA research communities (researchers themselves and org leaders). However, as we've s
...Thanks for the question! We do not view our work as necessarily focused on the West. To the extent our work so far has focused on such countries, it's because that's where we think our comparative advantage currently lies, but as our team learns, and possibly grows, this won't necessarily hold over time.
Thanks for the question! To echo Ozzie, I don't think it's fair to directly compare the quality of our work to the quality of GPI's work given we work in overlapping but quite distinct domains with different aims and target audiences.
Additionally, we haven't prioritized publishing in academic journals, though we have considered it for many projects. We don't believe publishing in academic journals is necessarily the best path towards impact in the areas we've published in given our goals and don't view it as our comparative advantage.
All this said, we don'
...My ranges represent what I think a reasonable position is on the probability of each creature's sentience given all current input and expected future input. Still, as I said:
...the range is still more of a guideline for my subjective impression than a declaration of what all agents would estimate given their engagement with the literature
I could have made a 90% subjective confidence interval, but I wasn't confident enough that such an explicit goal in creating or distributing my understanding would be helpful.
I meant to highlight a case where I downgraded my belief in a scenario in which there were multiple ways to update on a piece of evidence.
To take an extreme example for purposes of clarification, suppose you begin with a theory of sentience (or a set of theories) which suggests behavior X is possibly indicative of sentience. Then, you discover behavior X is possessed by entities you believe are not sentient, say, rocks. There are multiple options here as to how to reconcile these beliefs. You could update towards thinking rocks are sentient, or you could downgr
...I think the proposed karma system, particularly when combined with the highly rated posts being listed higher, is a quite bad idea. In general, if you are trying to ensure quality of posts and comments while spreading the forum out more broadly there are hard tradeoffs with different strengths and weaknesses. Indeed, I might prefer some type of karma weighting system to overly strict moderation but even then the weights proposed here don't seem justifiable.
What problem is being solved by giving up to 16 times maximum weight that would not be solved with gi...
It might be helpful if you elaborated more on what you mean by 'aim for neutrality'. What actions would that entail, if you did that, in the real world, yourself?
I meant picking someone with no stake whatsoever in the outcome. Someone who, though exposed to arguments about AI risk, has no strong opinions one way or another. In other words, someone without a strong prior on AI risk as a cause area. Naturally, we all have biases, even if they are not explicit, so I am not proposing this as a disqualifying standard, just a goal worth shooting for.
An ev...
Such personal incentives are important but, again, I didn't advocate getting someone hostile to AI risk. I proposed aiming for someone neutral. I know, no one is "truly" neutral but you have to weigh potential positive personal incentives of someone invested against potential motivated thinking (or more accurately in this case, "motivated selection").
I don't disagree on the problems of getting someone who thinks there is "negligible probability" of AI causing extinction being not suited for the task. That's why I said to aim for neutrality.
But I think we may be disagreeing over whether "thinks AI risk is an important cause" is too close to "is broadly positive towards AI risk as a cause area." I think so. You think not?
This survey makes sense. However, I have a few caveats:
Think that AI risk is an important cause, but have no particular convictions about the best approach or organisation for dealing with it. They shouldn't have worked for MIRI in the past, but will presumably have some association with the general rationality or AI community.
Why should the person overseeing the survey think AI risk is an important cause? Doesn't that self-select for people who are more likely to be positive toward MIRI than whatever the baseline is for all people familiar with...
It's complicated, but I don't think it makes sense to have a probability distribution over probability distributions, because it collapses. We should just have a probability distribution over outcomes.
I did mean over outcomes. I was referring to this:
If we're uncertain about Matthews' propositions, we ought to place our guesses somewhere closer to 50%. To do otherwise would be to mistake our deep uncertainty for deep scepticism.
That seems mistaken to me but it could be because I'm misinterpreting it. I was reading it as saying we should split the differ...
I think you are selling Matthews short on Pascal's Mugging. I don't think his point was that you must throw up your hands because of the uncertainty, but that he believes friendly AI researchers have approximately the same amount of evidence that AI research done today will have a 10^-15 chance of saving the existence of future humanity as for any other infinitesimal but positive chance.
Anyone feel free to correct me, but I believe in such a scenario spreading your prior evenly over all possible outcomes wouldn't arbitrarily just include splitting the difference betw...
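To illustrate the earlier point in this thread about a distribution over probability distributions "collapsing": for decision purposes, a mixture of hypotheses about an outcome's probability reduces to its expectation, a single probability over outcomes. A minimal sketch with invented numbers:

```python
# Candidate probabilities of some outcome, and our credence in each candidate.
candidate_probs = [0.25, 0.50, 0.75]
credences = [0.25, 0.50, 0.25]  # credences must sum to 1

# The "distribution over distributions" collapses to one number:
collapsed = sum(p * c for p, c in zip(candidate_probs, credences))
# collapsed -> 0.5: betting on this mixture is identical to betting on a flat 0.5
```

The second-order structure can still matter for how you update on new evidence, but for a single decision it carries no extra information beyond the collapsed probability.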
This is super practical advice that I can definitely see myself applying in the future. The introductions on the sheets seem particularly well-suited to getting people engaged.
Also, "What is the first thing you would do if appointed dictator of the United States?" likely just entered my favorite questions to ask anyone in ice-breaker scenarios, many of which have nothing to do with EA.
Hey Saulius,
I’m very sorry that you felt that way – that wasn’t our intention. We aren’t going to get into the details of your resignation in public, but as you mention in your follow up comment, neither this incident, nor our disagreement over WAW views were the reason for your resignation.
As you recall, you did publish your views on wild animal welfare publicly. Because RP leadership was not convinced by the reasoning in your piece, we rejected your request to publish it under the RP byline as an RP article representative of an RP position. This decision...
Probably not the right place to discuss it, but at some point I'd be interested in both the object level question of whether marginal wild animal welfare research should be funded and the more meta question of what RP WAW employees and ex-employees believe on this issue.
[Edit: as per Saulius' reply below, I was perhaps too critical here, especially regarding the WAW post, and it sounds like Saulius thinks that was managed relatively well by RP senior staff]
I found this reply made me less confident in Rethink's ability to address publication bias. Some things that triggered my 'hmmm not so sure about this' sense were:
- The reply did not directly address the claims in Saulius's comment. E.g., "I'm sorry you feel that way", not "I'm sorry". No acknowledgement that if, as Saulius claimed, a senior staff member told him that it was wrong to
...I get RP's concerns that an individual researcher's opinions not come across as RP's organizational position. However, equal care needs to be given to the flipside -- that the donor does not get the impression that a response fully reflects the researcher's opinion when it has been materially affected by the donor-communication policy.
I'm not suggesting that management interference is inappropriate . . . but the donor has the right to know when it is occurring. Otherwise, if I were a donor/funder, I would have to assume that all communications from R...
Thank you for your answer Marcus.
What bothers me is that if I said that I was excited about funding WAW research, no one would have said anything. I was free to say that. But to say that I’m not excited, I have to go through all these hurdles. This introduces a bias because a lot of the time researchers won’t want to go through hurdles and opinions that would indirectly threaten RP’s funding won’t be shared. Hence, funders would have a distorted view of researchers' opinions.
Put yourself into my shoes. OpenPhil sends an email to multiple people askin...