I went through the old emails today and I am confident that my description accurately captured what happened and that everything I said can be backed up.
Another animal advocacy research organization supposedly found CE plagiarizing their work extensively including in published reports, and CE failed to address this.
Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.
I believe this refers to an incident that happened in 2021. CE had an ongoing relationship with an animal advocacy policy organisation, occasionally providing research to support their policy work. We received a request for some input and over the next 24 hours we helped that policy org...
Thank you so much for an excellent post.
I just wanted to pick up on one of your suggested lessons learned that, at least in my mind, doesn’t follow directly from the evidence you have provided.
You say:
These wins suggest a few lessons. ... the value of cross-party support. Every farm animal welfare law I’m aware of, globally, passed with cross-party support. ... We should be able to too: there are many more conservative animal lovers than liberal factory farmers.
To me, there are two very opposing ways you could take this. Animal-ag industry is benefiting fr...
I can't speak for Lewis but as an animal advocate running an organisation, my bigger concern is that politicisation is irreversible and destroys option value.
Thanks, this is a good point. I agree that it's not obvious we should choose A) over B).
My evidence for A) is that it seems to be the approach that worked in every case where farm animal welfare laws have passed so far. Whereas I've seen a lot of attempts at B), but never seen it succeed. I also think B) really limits your opportunities, since you can only pass reforms when liberals hold all key levers of power (e.g. in the US, you need Democrats to control the House, Senate, and Presidency) and they agree to prioritize your issue.
My sense is that most his...
Have you considered blinded case work / decision making? Like one person collects the key information and anonymises it, and then someone else decides the appropriate response without knowing the names / orgs of the people involved.
Could be good for avoiding some CoIs. Has worked for me in the past for similar situations.
Thank you Saulius. Very helpful to hear. This sounds like a really positive story of good management of a difficult situation. Well done to Marcus.
If I read between the lines a bit, I get the impression that maybe more junior managers at Rethink (be that less competent or just newer to the org), with less confidence that their actions wouldn't rock the Rethink<->funder relationship, were perhaps more likely to put unwelcome pressure on researchers about what to publish. This is just a hypothesis, so it might be wrong. But it is also the kind of thing that good internal policies, good onboarding, good senior example-setting, or just discussions of this topic can all help with.
No, sorry, I wasn't saying that. My manager was Jacob Peacock, he was a great manager. He didn't put any unwelcome pressure and wasn't the one who talked to me about the email to OpenPhil. He said that I can publish my WAW articles on behalf of RP but then Marcus disagreed.
[Edit: as per Saulius' reply below I was perhaps too critical here, especially regarding the WAW post, and it sounds like Saulius thinks that was managed relatively well by RP senior staff]
I found this reply made me less confident in Rethink's ability to address publication bias. Some things that triggered my 'hmmm not so sure about this' sense were:
It sounds like Rethink stopped Saulius from posting a WAW post (within a work context) and it also looks like there was a potential conflict of interest here for senior staff as posting could affect funding
It is true that I wasn’t allowed to publish some of my WAW work on behalf of RP. Note that this WAW work includes not only the short summary Why I No Longer Prioritize Wild Animal Welfare (which got a lot of upvotes) but also three longer articles that it summarises (this, this, and this). Some of these do not threaten RP’s funding in any way. ...
Hi Peter, Rethink Priorities is towards the top of the places I'm considering giving this year. This post was super helpful. And these projects look incredible, and highly valuable.
That said, I have a bunch of questions and uncertainties I would love to get answers to before donating to Rethink.
1. What is your cost/benefit? Specifically, I would love to know any or all of:
- How will you prioritise amongst the projects listed here with unrestricted funds from small donors? Most of these projects I find very exciting, but some more than others. Do you have a rough priority ordering or a sense of what you would do in different scenarios, like if you ended up with unrestricted funding of $0.25m/$0.5m/$1m/$2m/$4m etc how you would split it between the projects you list?
I think views on this will vary somewhat within RP leadership. Here I am also reporting my own somewhat quick independent impressions which could update upon f...
- Can you assure me that Rethink's researchers are independent?
Yes. Rethink Priorities is devoted to editorial independence. None of our contracts with our clients include editorial veto power (except that we obviously cannot publish confidential information) and we wouldn't accept these sorts of contracts.
I think our reliance on individual large clients is important, but overstated. Our single largest client made up ~47% of our income in 2023 and we're on track for this to land somewhere between 40% and 60% in 2024. This means in the unlikely event tha...
Rethink's cost per published research report that is not majority funded by "institutional support"
Our AI work, XST work, and our GHD work were entirely funded by institutions. Our animal welfare work was mostly funded by institutions. However, our survey work and WIT work were >90% covered by individual donors, so let's zoom in on that.
The survey department and WIT department produced 31 outputs in 2023 against a spending of $2,055,048.78 across both departments including all relevant management and operations. This is $66,291.90 per report.
Notably ...
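The cost-per-output arithmetic above can be checked in a couple of lines (figures are taken from the comment itself; they are not independently verified):

```python
# Sketch of the cost-per-output calculation reported in the comment.
# Inputs: 2023 spending across the survey + WIT departments (including
# relevant management and operations) and the number of outputs produced.
total_spend = 2_055_048.78  # USD, both departments, 2023 (figure from the comment)
outputs = 31                # outputs produced across both departments in 2023

cost_per_output = total_spend / outputs
print(f"${cost_per_output:,.2f} per report")  # ~$66,291.90, matching the comment
```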
Hi Sam,
Thanks for the detailed engagement! I am going to respond to each with a separate reply.
Rethink's cost per researcher year on average (i.e. the total org cost divided by total researcher years, not salary).
I think the best way to look at this is marginal cost. A new researcher hire costs us ~$87K USD in salary (this is a median, there is of course variation by title level here) and ~$28K in other costs (e.g., taxes, employment fees, benefits, equipment, employee travel). We then need to spend ~$31K in marginal spending on operations and ~$28K i...
Can you assure me that Rethink's researchers are independent?
I no longer work at RP, but I thought I'd add a data point from someone who doesn't stand to benefit from your donations, in case it was helpful.
I think my take here is that if my experience doing research with the GHD team is representative of RP's work going forwards, then research independence should not be a reason not to donate.[1]
My personal impression is that of the work that I / the GHD team has been involved with, I have been afforded the freedom to look for our best guess of what...
Hi Marcus thanks very helpful to get some numbers and clarification on this. And well done to you and Rethink for driving forward such important research.
(I meant to post a similar question asking for clarification on the Rethink post too, but my perfectionism ran away with me: I never quite found the wording and then ran out of drafting time. Great to see your reply here.)
Hi Emily, sorry this is a bit off topic, but it's super useful for my end-of-year donations.
I noticed that you said that OpenPhil has supported "Rethink Priorities ... research related to moral weights". But in his post here Peter says that the moral weights work "have historically not had institutional support".
Do you have a rough very quick sense of how much Rethink Priorities moral weights work was funded by OpenPhil?
Thank you so much
We mean to say that the ideas for these projects and the vast majority of the funding were ours, including the moral weight work. To be clear, these projects were the result of our own initiative. They wouldn't have gone ahead when they did without us insisting on their value.
For example, after our initial work on invertebrate sentience and moral weight in 2018-2020, in 2021 OP funded $315K to support this work. In 2023 they also funded $15K for the open access book rights to a forthcoming book based on the topic. In that period of 2021-2023, for public-fa...
Hi, the debugging worked. It was a Chrome extension I had installed to hide cookie messages that was killing it. Thank you so much!!
Hi. I started drafting a reply but had to stop and now a week later I cannot find where I was drafting it. I would love to be able to see all the places where I have draft comments/replies autosaved. Thank you!
Can this be updated? This is the default "Contact us" page (if I click the sidebar on the right and click "contact us" it brings me here), but it seems very out of date.
There is no intercom bubble on the right, nor is there a "hide intercom" button on the edit profile page. There is a hide intercom button on the account settings page but it does not do anything. There are also a bunch of comments below saying similar things, but they have not been replied to.
Alternatively, one might adjust ambiguous probability assignments to reduce their variance. For example, in a Bayesian framework, the posterior expectation of some value is a function of both the prior expectation and evidence that one has for the true value. When the evidence is scant, the estimated value will revert to the prior.[30] Therefore, Bayesian posterior probability assignments tend to have less variance than the original ambiguous estimate, assigning lower probabilities to extreme payoffs.[31]
Would you say this is being ambigui...
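The shrinkage claim in the quoted passage can be illustrated with a standard normal-normal conjugate update (an illustrative sketch with made-up numbers, not taken from the post):

```python
# Normal-normal conjugate update: with scant evidence the posterior mean
# reverts toward the prior, and the posterior variance is always smaller
# than both the prior variance and the variance of the noisy estimate.
prior_mean, prior_var = 0.5, 0.04  # hypothetical prior over some quantity
obs_mean, obs_var = 0.9, 0.25      # a noisy, "ambiguous" estimate

# Precision-weighted combination (standard conjugate formulas)
post_var = 1 / (1 / prior_var + 1 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)

assert post_var < prior_var and post_var < obs_var
assert prior_mean < post_mean < obs_mean  # shrinkage toward the prior
```

This matches the quoted point: the posterior assigns less variance than the original ambiguous estimate, pulling probability away from extreme payoffs.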
Sorry to be annoying, but after reading the post "Animals of Uncertain Sentience" I am still very confused about the scope of this work.
My understanding is that any practical guidance on how to make decisions is out of the scope of that post. You are only looking at the question of whether the tools used should in theory be aiming to maximise true EV or not (even in cases where those tools do not involve calculating EV).
If I am wrong about the above do let me know!
Basically I find phrases like "EV maximization decision procedure" and "using EV maximisation to make th...
"The team is only a few months old and the problems you're raising are hard"
Yes a full and thorough understanding of this topic and rigorous application to cause prioritisation research would be hard.
But for what it's worth, I would expect there are some easy quick wins in this area too. Lots of work has been done outside the EA community, just not applied to cause prioritisation decision making, at least that I have noticed so far...
Amazing. Super helpful to hear. Useful to understand what you are currently covering, what you are not covering, and what the limits are. I very much hope that you get the funding for more and more research.
I am very, very excited to see this research. It's the kind of thing that I think EAs should be doing a lot more of, and it seems shocking that it has taken us more than a decade to get round to such basic fundamental questions on cause prioritisation. Thank you so much for doing this.
I do however have one question and one potential concern.
Question: My understanding from reading the research agenda and plan here is that you are NOT looking into the topic of how best to make decisions under uncertainty (Knightian uncertainty, cluelessness, etc). It looks ...
I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil, see here. As such many things struggled for funding, see here.
We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum.
That fact combined with funders who were (and maybe still are) somewhat against funding people (e...
Thank you Asya for all the time and effort you have put in here and the way you have managed the fund. I've interacted with the LTFF a number of times and you have always been wonderful: incredibly helpful and sensible.
Thanks Linch. Agree feedback is time consuming and often not a top priority compared to other goals.
These short summary reasons in this post for why grants are not made are great and very interesting to see.
I was wondering: do unsuccessful grant applicants tend to receive this feedback (of the paragraph-summary kind in this post) or do they just get told "sorry, no funding"?
I wonder if this could help the situation. I think if applicants have this feedback, and if other granters know that applicants get feedback they can ask for it. I've definitely been asked "where else did you apply and what happened" and been like "I applied for x grant and got feedbac...
I can't speak about all cases, but I think for most cases in the rough cluster of situations like the above, we do not currently give reasons for rejection at the level of granularity of the above. I'm a bit sad about this but I think it's probably the right call. I remember a specific situation some months ago where I wrote fairly elaborate feedback for an applicant but I was dissuaded from sending it, in retrospect for probably the right reasons.
If we have something like 3x the current grantmaker capacity, I'd love for us to give more feedback, but...
1.
I really like this list. Lots of the ideas look very sensible.
I also really really value that you are doing prioritisation exercises across ideas and not just throwing out ideas that you feel sound nice without any evidence of background research (like FTX, and others, did). Great work!
– –
2.
Quick question about the research: Does the process consider cost-effectiveness as a key factor? For each of the ideas do you feel like you have a sense of why this thing has not happened already?
– –
3.
Some feedback on the idea here I know...
Also keen on this.
Specifically, I would be interested in someone carrying out an independent impact report for the APPG for Future Generations and could likely offer some funding for this.
why is Tetlock-style judgmental forecasting so popular within EA, but not that popular outside of it?
The replies so far seem to suggest that groups outside of EA (journalists, governments, etc) are doing a smaller quantity of forecasting (broadly defined) than EAs tend to.
This is likely correct but it is also the case that groups outside of EA (journalists, governments, etc) are doing different types of forecasting than EAs tend to. There is less "Tetlock-style judgmental" forecasting and more use of other tools such as horizon scanning, scenario planning,...
Hi John.
Thank you for the feedback and comments.
On deforestation. Just to be clear, the result of our prioritisation exercise was that our top recommendations (ideas 1-2) were on subscription models for new antibiotics and stopping dangerous dual-use research. Ideas 4-7 (including the deforestation one) did well in our early prioritisation but ultimately we did not recommend them. I have made a minor edit to the post to try to make this clearer.
The stopping deforestation report idea was originally focused on limiting the human animal interface to prevent zo...
I would have to check this with Akhil, the lead author, but my understanding is that this CEA compares a case where the PASTEUR act passes with a business-as-usual case where very few (but not zero) new antibiotics are developed.
I agree this is probably overly-optimistic as we can and probably should assume that someone is likely to do something about antibiotic resistance in the next few decades. Good spot!
And thank you for the great questions and for looking over things in such detail.
Hi Ben, happy to justify this. I was responsible for the alternate estimate of 10%-17.5%
– –
These numbers here are consistent with our other estimates of policy change. Other estimates were easier (but not easy) to justify as they were in better-evidenced areas and tended to range between 5% and 40% (see 2022 ideas here). Areas with the best data were road safety policy, where we looked at 84 case studies finding a 48% chance of policy success, and food fortification policy, where we looked at 62 case studies (in Annex) with a 47% chance of succes...
My entry:
Modern slavery
(Disclaimer the following is my initial impressions based on 2 minutes of Googling, cannot promise accuracy)
Scale – 400k-1 million people are in slavery in the DRC. They lead horrendous lives, suffer a myriad of terrible health conditions, and are not free. The number is huge: more than die of malaria each year, more than die of AIDS each year. EAs have looked into US criminal justice, but there might be nearly as many slaves in the DRC as there are prisoners in the US, and ALL of them are being held unjustly and likely suffer in ma...
Curious if you have a sense of the geographic scope of these needs / talent gaps?
(Policy development work can be extremely country dependent. The same person could be highly qualified to do this work in the UK and highly underqualified to do it for the US, or India, or China or Finland, etc).
Thanks :-)
It is an honour to work with you too!!
Huge thanks to Austin and the Manifold Markets team for making this collaboration possible. It has been a pleasure to work with you and your support has been invaluable.
[Added a thanks to the post]
Is there info about why grantors didn't give more funding to HH?
I don’t have this info. I think it is possible that funders are not interested in Africa (HH was working in Kenya) or that funders don’t value this kind of work because they see it as incremental welfare improvements that won't lead to long-run change, but I'm mostly honestly speculating ...
Animal charities. Most suffering in the world happens on farms. See recommendations for where to donate by Animal Charity Evaluators.
To address extreme suffering in humans then consider:
Hi,
You might be interested that Healthier Hens recently said "Healthier Hens (HH) has scaled down due to not being able to secure enough funding to provide a sufficient runway to pilot dietary interventions effectively." https://forum.effectivealtruism.org/posts/6eaY7MEDWnK39sCEi/healthier-hens-y1-5-update-and-scaledown-assessment
HH seems like an organisation that is counterfactually funding constrained and meets your criteria (underfunded, focused on measurable chicken welfare, no veganism promotion, potentially highly impactful).
This kind of case a...
This is THE BEST news I've heard for animals in a while. So exciting.
Maybe I'm optimistic but I do think this opens up a really viable strategy for gradual policy shifts across the US starting in states where sales bans are most tractable ...
This seems exciting.
I think the EU AI act is somewhat irrelevant to AGI stuff given it is predominantly usage based so seems to skirt around the challenges of more general AI.
But I expect it will prompt member states (in this case Spain) to set up AI regulators and if that is done well the process put in place could form useful models for wider adoption (e.g. in the US) which would act as a layer of defence in depth against AGI risk.
Will you be working on inputting into this? If not, why not / what barriers do you face?
Mostly EA policy research in global development has stuck to health policy.
There is a bunch of research within the field of global health policy. Some good (independently evaluated / granted to by EA-aligned organisations) places to give are:
All of which are listed here: https://www.givewell.org/research/public-health-regulation-update-August-2021 with GiveWell's reasoning for thinking it is effective.
You might a...
Thank you so much for writing this. I think it is an excellent piece and makes a really strong case for how longtermists should consider approaching policy. I agree with most of your conclusions here.
I have been working in the space for a number of years advocating (with some limited successes) for a cost-effectiveness approach to government policy making on risks in the UK (and am a contributing author to the Future Proof report you cite). Interestingly, despite having made progress in the area, I am over time leaning more towards work on specific advocacy ...
We separately looked at two ideas on new technology:
(We found this breakdown useful as the problems are different. The current patent system does not work for antimicrobials due to the need to limit the use of last line novel antibiotics. The current patent system works better for preparing for future pandemics but has limits as the pay out is uncertain and might not happen within the life-time of a p...
Hi Nick, Great to hear from you and to get your on-the-ground feedback. I lead the research team at CE.
These are all really really great points and I will make sure they are all noted in the implementation notes we produce for the (potential) founders.
All our ideas have implementation challenges, but we think that delivering on these ideas is achievable and we are excited to find and train up potential founders to work on them!!
– –
One point of clarification, in case it is not clear: on kangaroo care we are recommending an approach of providing ...
Yeah, I somewhat agree this would be a challenge, and there is a trade off between the time needed to do this well and carefully (as it would need to be done well and carefully) and other things that could be done.
It would surprise me a lot if the various issues were insurmountable. I am not an expert in how to publish public evaluations of organisations without upsetting those organisations or misleading people, but connected orgs like GiveWell do this frequently enough and must have learnt a thing or two about it in the past few years. To take one th...
There's a lot of policy work, it's just not getting identified.
In Biorisk, Openphil funds Center for Health Security, NTI, and Council on Strategic Risks. In AI, they fund GovAI, CNAS, Carnegie, and others. Those are all very policy-heavy.
Thanks for the useful post Holden.
I think it would be great to see the full published tiered list.
In global health and development funders (i.e. OpenPhil and Givewell) are very specific about the bar and exactly who they think is under it and who they think is over it. Recently global development funders (well GiveWell) have even actively invited open constructive criticism and debate about their decision making. It would be great to have the same level of transparency (and openness to challenge) for longtermist grant making.
Is there a plan to publish the ...
I think there are various reasons for not having such a list public:
If grantee concerns are a reason against doing this, you could allow grantees to opt into having their tiers shared publicly. Even an incomplete list could be useful.
I'd personally happily opt in with the Atlas Fellowship, even if the tier wasn't very good.
If a concern is that the community would read too much into the tiers, some disclaimers and encouragement for independent thinking might help counteract that.
Also, I wonder if we should try (if we can find the time) co-writing a post on giving and receiving critical feedback in EA. Maybe we diverge in views too much and it would be a train wreck of a post, but it could be an interesting exercise to try, maybe trying to pull out a ToC. I do agree there are things that both I and the OP authors (and those responding to the OP) could do better.
Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.
– –
Quality of our reports
I would like to push back a bit on Joey's response here. I agree that our research is quicker, scrappier, and goes into less depth than that of other orgs, but I am not convinced that our reports have more errors or worse reasoning than the reports of other organisations (thinking of non-peer-reviewed global health and animal welfare organisations like GiveWell, OpenPhil, Animal Charity Evaluators, Rethink Priorities, Founders Pl...
I think it is quite clear that a lot of your research isn't at the bar of those other organizations (though I think for the reasons Joey mentioned, that definitely can be okay). For example, I think in this report, collapsing 30 million species with diverse life histories into a single "Wild bug" and then taking what appear to be completely uncalibrated guesses at their life conditions, then using that to compare to other species is just well below the quality standards of other organizations in the space, even if it is a useful way to get a quick sense of things.