
Note: I found out that there have been detailed expert reviews of Founders Pledge's work in the past. See Johannes Ackva's comment below. Also, despite the focus on Founders Pledge, this doesn't mean I think they are especially under-reviewed. More generally, I don't mean to criticise FP; I just wanted to share this argument and see what people think about it.

Last time I checked, I couldn't find any in-depth expert reviews of Founders Pledge's cost-effectiveness estimates. I know that there isn't one for their evaluation of the Clean Air Task Force (CATF). Giving Green and SoGive have looked at some of them, but not in depth (they told me this in correspondence). So, assuming I didn't miss anything, there are two possibilities:

(i) they didn't have any such reviews, or
(ii) they had reviews but didn't publish them.

Let's first assume (i) is the case.

There seem to be large potential benefits to getting reviewed. If such an expert found out that FP is significantly off, then this is valuable information, because it might lead donors to change the amount they donate. If FP underestimated, say, CATF's cost-effectiveness, donors might shift funding from less effective opportunities towards CATF, and if FP overestimated its cost-effectiveness, the reverse might happen. Either way, if an expert review uncovers an error of sufficient size, it is not implausible that this would improve large funding decisions.

If, however, such an expert verifies FP's models, then this is valuable information too. In that case, their research seems much more trustworthy from the outside, which plausibly attracts more donors. This is especially true for cost-effectiveness estimates that seem spectacular. Take the claim that CATF averts 1 ton of CO2e for less than a dollar: many people outside of EA that I talked to were initially skeptical of this. (I'm not making any claim about the reasonableness of that estimate; I am merely commenting on its public perception.)

So it seems like there are large potential benefits to getting an expert review for organisations like Founders Pledge (everything I'm saying here might well apply to many other similar organisations that I'm not as familiar with).


The remaining question is then: do the expected benefits of an independent analysis justify its costs?

I assume that you can hire an expert researcher for less than $100/hour and that such an analysis would take less than 4 full work weeks. At 40 hours/week, the whole thing would cost less than $16,000. That figure seems unrealistically high, but let's assume it isn't.

If, as I estimate, tens to hundreds of millions of dollars are allocated on the basis of their recommendations, it still seems worth it, doesn't it?

Edit: I have probably overestimated this amount quite a lot. See Linch's comment below.
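
To make the comparison concrete, here is a minimal back-of-the-envelope sketch of the numbers above. All inputs are my own rough assumptions (and, per the edit above, the money-moved figure is likely too high), not Founders Pledge data.

```python
# Back-of-the-envelope comparison: cost of one external review vs. money moved.
# Every input below is a rough personal assumption, not a Founders Pledge figure.

hourly_rate = 100     # USD per expert hour (assumed upper bound)
weeks = 4             # assumed duration of the review
hours_per_week = 40

review_cost = hourly_rate * weeks * hours_per_week
print(f"Review cost: ${review_cost:,}")  # -> Review cost: $16,000

# Assumed money moved per year on FP recommendations (probably an overestimate).
money_moved = 100_000_000

print(f"Review cost as share of money moved: {review_cost / money_moved:.4%}")  # -> 0.0160%
```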

Now let's assume that (ii) is the case - they had independent expert reviews but didn't publish them. In that case, for the exact reasons given above, it would be important to make them public.

What do you make of this argument?
 


For what it's worth, I agree with the broad thesis, and I'm guessing that the movement currently underestimates the value of things like external reviews, epistemic spot checks, red teaming, etc.

I do suspect getting highly competent researchers to do this review will be nontrivially difficult, in terms of opportunity costs and not just money. 

But you might be able to do something with interested (graduate) students instead; at least most of the errors I've historically been able to catch did not feel like they required particularly strong domain expertise.

That said, I think you're probably underestimating the costs of the review moderately and overestimating the amount of money moved by Founders Pledge's climate reports by a lot. I don't think it's enough to flip the sign of your conclusion, however.


I hope to have time for a longer comment on Monday, but for now some quick clarifications (I work at FP and currently lead the climate work):

1. Linch's comment on FP funding is roughly right; for FP it is more that a lot of FP members do not have liquidity yet (it is a pledge on liquidity events).

2. I spent >40h engaging with the original FP Climate Report as an external expert reviewer (in 2017 and 2018, before I joined FP in 2019). There were also lots of other external experts consulted.

3. There isn't, as of now, an agreed-upon methodology for evaluating advocacy charities, so you can't simply hire an expert for this. Orgs like FP, SoGive, and OPP are trying to work out methodologies for this kind of work, but there is no single template. The approach we chose is, roughly, cluster-based reasoning, integrating lots of different considerations and models. I discuss this in more detail here (second part of the comment).

4. As discussed in that comment, lots of expert judgments and scientific results are reflected in the recommendation, but the ultimate cost-effectiveness estimates come from combining them. Crucially, something like a CATF recommendation does not depend on a single cost-effectiveness estimate (one we describe as highly uncertain wherever we talk about it), but on (a) different ways to estimate cost-effectiveness; (b) most importantly, crucial considerations about the climate space and solutions (e.g. the main reason we think CATF is very high impact is that they focus on neglected solutions that can have large leverage for global decarbonization, as discussed here, with our view of what's important / effective / neglected driven by years of research plus lots of conversations with experts); (c) CATF's long and successful track record; and (d) continuous investigation over time (>80h this year alone) into CATF's current programs, funding margins, and plans.

5. Skepticism from external people about the cost-effectiveness of philanthropy in climate comes, I think, in large part from the false promises of cheap offsets that lack environmental integrity. When offsets cost < 1 USD/t they are not credible (and indeed, they aren't), whereas credible offsets are much more expensive (> 100 USD/t). So the fact that you can be much more cost-effective when you are risk-neutral and leverage several impact multipliers (advocacy, policy change, technological change, increased diffusion) is hard to explain and not intuitively plausible.

Hi Johannes!

I appreciate you taking the time.

"Linch's comment on FP funding is roughly right, for FP it is more that a lot of FP members do not have liquidity yet"

I see, my mistake! But is my estimate sufficiently off to overturn my conclusion?

" There were also lots of other external experts consulted." 

Great! Do you agree that it would be useful to make this public? 

"There isn't, as of now, an agreed-to-methodology on how to evaluate advocacy charities, you can't hire an expert for this." 

And the same is true for evaluating cost-effectiveness analyses of advocacy charities (e.g. yours on CATF)?

"So the fact that you can be much more cost-effective when you are risk-neutral and leverage several impact multipliers (advocacy, policy change, technological change, increased diffusion) is hard to explain and not intuitively plausible." 

Sure, that's what I would argue as well. That's why it's important to counter this skepticism by signalling very strongly that your research is trustworthy (e.g. by publishing expert reviews).

I'm slightly confused by the framing here. You only mention Founders Pledge, which, to me, implies you think Founders Pledge don't get external reviews but other EA orgs do.

This doesn't seem right, because Founders Pledge do ask others for reviews: they've asked me/my team at HLI to review several of their reports (StrongMinds, Action for Happiness, psychedelics), which we've been happy to do, although we didn't necessarily get into the weeds. I assume they do this for their other reports, and this is what I expect other EA orgs do too.

Hi Michael!

"You only mention Founders Pledge, which, to me, implies you think Founders Pledge don't get external reviews but other EA orgs do."

> No, I don't think this, but I should have made this clearer. I focused on FP because I happened to know that they didn't have an external expert review of one of their main climate-charity recommendations, CATF, and because I couldn't find any report about an external expert review on their website.
I think my argument here holds for any other similar organisation. 

"This doesn't seem right, because Founders Pledge do ask others for reviews: they've asked me/my team at HLI to review several of their reports (StrongMinds, Actions for Happiness, psychedelics) which we've been happy to do, although we didn't necessarily get into the weeds." 

>Cool, I'm glad they are doing it! But if you say "we didn't necessarily get into the weeds", does it count as an independent, in-depth expert review? If yes, great; then I think it would be good to make that public. If no, the conclusion in my question/post still holds, doesn't it?


Gotcha

does it count as an independent, in-depth expert review?

I mean, how long is a piece of string? :) The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn't necessarily delve into those or see if they had been interpreted correctly. 

Re making things public, that's a bit trickier than it sounds. Usually I'd leave a bunch of comments in a google doc as I went, which wouldn't be that easy for a reader to follow. You could ask someone to write a prose evaluation - basically like an academic journal review report - but that's quite a lot more effort and not something I've been asked to do.

In HLI, we have asked external academics to do that for us for a couple of pieces of work, and we recognise it's quite a big ask vs just leaving gdoc comments. The people we asked were gracious enough to do it, but they were basically doing us a favour and it's not something we could keep doing (at least with those individuals). I guess one could make them public - we've offered to share ours with donors, but none have asked to see them - but there's something a bit weird about it: it's like you're sending the message "you shouldn't take our word for it, but there's this academic who we've chosen and paid to evaluate us - take their word for it".

"The way I did my reviewing was to check the major assumptions and calculations and see if those made sense. But where a report, say, took information from academic studies, I wouldn't necessarily delve into those or see if they had been interpreted correctly. "

>> Thanks for clarifying! I wonder if it would be even better if the review was done by people outside the EA community. Maybe the sympathy of belonging to the same social group, and shared distinctive assumptions (assuming they exist), make people less likely to spot errors? This is pretty speculative, but it wouldn't surprise me.

"Re making things public, that's a bit trickier than it sounds. Usually I'd leave a bunch of comments in a google doc as I went, which wouldn't be that easy for a reader to follow. You could ask someone to write a prose evaluation - basically like an academic journal review report - but that's quite a lot more effort and not something I've been asked to do."

>> I see, interesting! This might be a silly idea, but what do you think about setting up a competition with a cash prize of a few thousand dollars for the person who spots an important mistake? If you manage to attract the attention of a lot of PhD students in the relevant area, you might really get a lot of competent people trying hard to find your mistakes.

"it's like you're sending the message "you shouldn't take our word for it, but there's this academic who we've chosen and paid to evaluate us - take their word for it"."

>> Maybe that would be weird for some people. I would be surprised, though, if the majority of people wouldn't interpret a positive expert review as a signal that your research is trustworthy (even if it's not actually a signal, because you chose and paid that expert).
 

Thanks for clarifying! I wonder if it would be even better if the review was done by people outside the EA community. Maybe the sympathy of belonging to the same social group, and shared distinctive assumptions (assuming they exist), make people less likely to spot errors? This is pretty speculative, but it wouldn't surprise me.

I can't immediately remember where I've seen this discussed before, but a concern I've heard raised is that it's quite hard to find people who (1) know enough about what you're doing to evaluate your work but (2) are not already in the EA world.

I see, interesting! This might be a silly idea, but what do you think about setting up a competition with a cash prize of a few thousand dollars for the person who spots an important mistake? If you manage to attract the attention of a lot of PhD students in the relevant area, you might really get a lot of competent people trying hard to find your mistakes.

Hmm. Well, I think you'd have to be quite a big and well-funded organisation to do that. It would be a lot of management time to set up and run a competition, one which wouldn't obviously be that useful (in terms of the value of information, such a competition is more valuable the worse you think your research is). I can see organisations quite reasonably thinking this wouldn't be a good staff priority vs other things. I'd be interested to know if this has happened elsewhere and how impactful it has been.

>> Maybe that would be weird for some people. I would be surprised, though, if the majority of people wouldn't interpret a positive expert review as a signal that your research is trustworthy (even if it's not actually a signal, because you chose and paid that expert).

That's right. People who were suspicious of your research would be unlikely to have much confidence in the assessment of someone you paid.

I’m not sure, but given that hundreds of millions of dollars are allocated on the basis of their recommendations, it still seems worth it, doesn’t it?   

Wait, is this true? How many dollars were allocated due to their recommendations last year?

I'm not sure, but according to Wikipedia, in total ~3 billion dollars have been pledged via Founders Pledge. Even if that doesn't increase and only 5% of that money is donated according to their recommendations, we are still in the ballpark of around a hundred million USD, right?

On the last question I can only guess as well. So far, around 500 million USD has been donated via Founders Pledge. Founders Pledge has existed for around 6 years, so on average around 85 million USD per year since it started. It seems likely to me that at least 5% has been allocated according to their recommendations, which makes an average of ~4 million USD per year. The true value is of course much higher, because other people who haven't taken the pledge follow their recommendations as well.
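
For transparency, here is a small sketch of the arithmetic behind these two guesses; every input is my own rough assumption (the pledged total is from Wikipedia, and the 5% share is a guess).

```python
# Arithmetic behind the two rough estimates above (all inputs are assumptions).

pledged_total = 3_000_000_000   # USD pledged via Founders Pledge (per Wikipedia)
donated_total = 500_000_000     # USD donated via Founders Pledge so far
years = 6                       # approximate age of Founders Pledge
follow_share = 0.05             # assumed share allocated per FP recommendations

print(pledged_total * follow_share)           # 150,000,000 -> "ballpark of a hundred million"
print(donated_total / years * follow_share)   # ~4,170,000  -> "~4 million USD per year"
```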

I'd be interested in having someone from Founders Pledge comment. Many EA orgs are in a position where a lot of dollars are committed but people don't know where to give, so they hold off; hence the EA movement as a whole has double-digit billions of dollars but only gave ~$400M last year.
