
alamo 2914

74 karma · Joined Sep 2022

Comments: 15

Interesting. I'm sorry to hear that the system is so fucked up. I really hope you'll be able to improve it.

I agree with the 2 commenters below. I wouldn't trust him very much on climate change, but the CCC's work on GHD is of a different nature. The CCC has many researchers, of whom Bjørn is only one. I would also like to add - if he can get Nobel Prize winning economists Vernon Smith, Tom Schelling, Finn Kydland and Douglass North to work with him, multiple times each, that's a good indicator that the CCC's research is good.  

I agree with your points about R&D and E-procurement, (and some are mentioned in the report), thanks for your input. 

It's really cool that your wife works in land tenure! The philosophical framework I have in mind for land tenure is similar to the one I use for other rough estimates. As Scott Alexander put it - IF IT’S WORTH DOING, IT’S WORTH DOING WITH MADE-UP STATISTICS. Essentially, it's better to have at least some information in your land registration system, even if it's not very accurate, than none. What do you think about this?

As for education, I don't know.  

It's true that he never identified as an EA. I mean EA in the sense of using reason and evidence to perform cost-effectiveness analysis across different causes, and choosing the best regardless of what comes up. These are IMO the core characteristics of an EA; everything else is a bonus.

Also, the first occurrences of something will usually be different than what it will develop to be. Was Hippocrates a licensed MD? Did he rely on evidence-based medicine? Did Galileo ever do a PhD in physics? 

I would rate him decently on truth-seeking. He brought the world's best economists into his think tank, multiple times over. It would surely have been easier to invite less esteemed economists. You might think he did that only to raise the status of the CCC (and by extension himself), but that's too cynical in my opinion.

I might've used too strong language in my original post, such as the talk about being a sucker. For me it's useful to think about donations as a product I'm buying, but I probably took it too far. And I don't think I properly emphasized my main message, which was (as I added later) - the explore-exploit tradeoff for causes is really hard if you don't know how far exploration could take you. Honestly, I'm most interested in your take on that. I initially only used GiveWell and CEARCH to demonstrate that argument and show how I got to it.

The drug analogy is interesting, although I prefer the start-up analogy. Drug development is more binary - some drugs can just flat-out fail in humans, while start-ups are more of a spectrum (the ROI might be smaller than expected, etc.). I don't see a reason to think of CEARCH-recommended programs, or of most other exploratory work, as binary. Of course lobbying could flat-out fail, but it's unlikely we'll have to update our beliefs to "this charity would NEVER work", as might happen in drug development. And obviously with start-ups, there's also a lot of difference between the initial market research and the later stages (as you said).

GiveWell has a lot of flaws for cause exploration. They really focus on charity research, not cause research. By design, it's strongly biased towards existing causes and charities. The charities must be interested in, and cooperate with, GiveWell. They look for a track record, so charities operating in high-risk, low-tractability areas such as policy have a harder time. In most cases that makes sense; sometimes it can miss great opportunities.

Yes, they've funded some policy focused charities, but they might've funded much more if they were more EV maximizing instead of risk-averse. Seeing the huge leverage such options provide, it's entirely possible. 

Also, they aren't always efficient - look at GiveDirectly. Their bar for top charities was 10x GiveDirectly for years, yet they kept GiveDirectly as a top charity until last year. This is not some small, hard-to-notice inefficiency - it's literally their consistent criterion for their flagship charities. Can you imagine a for-profit company telling its investors "well, we believe these other channels have an ROI of at least 10x, but please also consider investing in this channel with 1x ROI", for multiple years? I can't. Let alone listing that less efficient channel as one of the best investments…

That's exactly what I mean when I say altruism, even EA, can have gross inefficiency in allocations. It's not special to GiveWell; I'm just using them as an example.

If GiveWell can make such gross mistakes, then probably others can. Another example was their relative lack of research on family planning, which I've written about. They're doing A LOT of great things too. But I must say I am a bit skeptical of their decision making sometimes. 

Keep in mind, CEARCH would have to be EXTREMELY optimistic for us to conclude that it hasn't found a couple of causes 10x GiveWell. We are talking about roughly 40x optimism. That might be the case, but IMO it’s a strong enough assertion to require proof. Do you have examples of anything close to 40x optimism in cost-effectiveness?

I agree that a lot of the difference in EAs' donations can come from differing perspectives, probably most. But I think even utilitarian, EV-maximizing, 0-future-discount, animal-equalist EAs (or any other set of shared beliefs) sometimes donate to different causes. It's definitely not impossible.

As for other examples of 10x GiveWell cost-effectiveness in global health: 

  • CE has estimated another charity yields $5.62 per DALY.
  • An Israeli non-profit that produced an estimate of $4.3 per QALY, in cooperation with EA Israel. A volunteer told me he believed they were about 3x too optimistic, but that’s still around 10x GiveWell.

Also here is an example of 4x disagreement between GiveWell and Founders Pledge, and an even bigger disagreement with RP, on a mass media campaign for family planning. Even the best in the business can disagree. 

Sorry for this being a bit of a rant.
 

Thanks for the reply!

If I understand your main arguments correctly, you're basically saying that high cost-effectiveness options are rare, uncertain, and have a relatively small funding gap that is likely to be closed anyway. Also, new charities are likely to fail and can be less effective. And smart EAs won't waste their money.

Uncertainty and rarity: Assume that CEARCH is on average 5x too optimistic in their high-confidence reports, and 20x too optimistic in their low-confidence ones (that's A LOT). Still, out of the 17 causes they researched, 4 come out over 10x as effective as top GiveWell charities. Almost a quarter. They were probably lucky - Rethink Priorities, CE and the like don't have such a high hit rate (it would be an interesting topic for analysis). But still, their budgets are minuscule. RP has spent around $14m in its entire lifetime. CEARCH has only 2 full-time workers and was founded less than 2 years ago. CE had a total income of £775k in 2022. The cost of operating these organizations is tiny compared to the amount we spend on direct work.

Small funding gap, likely to be closed anyway: Let's say that on average, finding such a cause requires $5m (which seems overblown, given the figures above). Assume these causes are on average 20x as effective as top GiveWell charities, and the funding gap is indeed small - only $10m on average. That's $15m that would do as much good as $200m given to GiveWell. So by finding and filling 2.6 such causes per year, we could equal the impact GiveWell had in 2021. And those funding gaps aren't that likely to be closed - it took more than 10 years after the inception of EA for CEARCH to find those causes. In the stock market, a 2% mispricing may be closed within hours. In altruism, a 500% misallocation will never be closed without a deliberate challenge.
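A quick back-of-envelope sketch of that arithmetic. Every figure here is one of my assumptions from the paragraph above, not verified data; the GiveWell 2021 total is the rough number implied by the 2.6 figure:

```python
# Sanity check of the funding-gap arithmetic above.
# All numbers are assumptions from the text, not real data.
research_cost = 5e6   # assumed cost of finding one such cause, in dollars
funding_gap = 10e6    # assumed funding gap per cause, in dollars
multiplier = 20       # assumed cost-effectiveness vs. top GiveWell charities

spend_per_cause = research_cost + funding_gap   # total spend per cause: $15m
givewell_equiv = funding_gap * multiplier       # good done, in "GiveWell dollars": $200m

givewell_2021 = 520e6  # rough GiveWell money moved in 2021 (my assumption)
causes_per_year = givewell_2021 / givewell_equiv

print(f"${spend_per_cause/1e6:.0f}m spent does as much good as ${givewell_equiv/1e6:.0f}m to GiveWell")
print(f"causes needed per year to match GiveWell 2021: {causes_per_year:.1f}")
```

The conclusion scales linearly with each assumption, so even if any single number is off by 2x, the overall leverage story survives.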

And these causes are pretty easy to find. CEARCH started in 2022 and has already found 4 causes 10x GiveWell under my aforementioned pessimistic assumptions. CE and RP have found more. There are big funding gaps because there are many causes like this - there are many big governments to lobby. We should aim to close those funding gaps as soon as possible, because that would help more people.

New charities are likely to fail and be less effective: CE's great work shows that might not be true. A substantial number of their charities report significant success. Also, I assume failure risk is taken into account in exploratory research. Even if it diminishes the impact by 50%, it won't matter to the overall scheme.

EAs won't waste their money on bad donations: If that were true, then all EAs seeking to maximize expected value would roughly agree on where to donate. Instead, we see the community split into 4 main parts (global H&P, animals, existential risk, meta). Some people in EA simply don't and won't donate to some of these parts. This shows that at least part of the community might be donating to worse charities.

Imagine you have two investments, both of which return your money only after 10 years:

  1. A safe investment that returns Y.
  2. A risky start-up that you expect to return 10Y in EV.

What would you choose? I bet the start-up. With altruism there's no reason to be loss-averse, so the logic is even stronger.
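The comparison can be written out as a tiny expected-value calculation. The success probability and payoff below are hypothetical, chosen only so the start-up's EV comes out to 10Y:

```python
# Risk-neutral comparison of the two hypothetical investments.
Y = 1.0                      # return of the safe investment
p_success = 0.25             # assumed probability the start-up succeeds
payoff_if_success = 40 * Y   # assumed payoff on success, so EV = 10Y

ev_safe = Y
ev_startup = p_success * payoff_if_success  # expected value = 10Y

# An altruist maximizing expected impact (no loss aversion) just
# compares expected values and picks the larger one.
choice = "start-up" if ev_startup > ev_safe else "safe"
print(choice, ev_startup)
```

For a personal investor, risk aversion can rationally tilt the choice back toward the safe option; the point is that for pooled altruistic spending that tilt mostly disappears.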

I guess my main points are that we should spend more on cause prioritization and on supporting new charities (akin to CE). But then - when do we know we've found a decent cause? The exploration-exploitation trade-off is impossible to navigate if you don't know how far exploration could take you.

EA is the smartest, most open community I know. I'm sure it will explore this. 

It's an interesting point, but they're just reviewing the evidence... 

A better exercise for not falling into self-deception is 'mental contrasting', in which you first think about achieving your goals, and then about the obstacles that stand in your way and how to overcome them. It may also help with goal achievement, especially in combination with a technique called 'implementation intentions'.[1]

  1. ^

    Wang G, Wang Y and Gai X (2021) A Meta-Analysis of the Effects of Mental Contrasting With Implementation Intentions on Goal Attainment. Front. Psychol. 12:565202. doi: 10.3389/fpsyg.2021.565202

I agree, it seems like a very good idea to post it here. I assume it's seen by dozens of times more people than within your team. Also, the EA Forum is definitely less biased about any given thing than organizations that work on that thing. That's just the way humans work.
