
freedomandutility


I don't think the third question is a good-faith question.

This is the context for how Wenar used the phrase: "And he’s accountable to the people there—in the way all of us are accountable to the real, flesh-and-blood humans we love."

I interpret this as "direct interaction with the individuals you are helping ensures accountability, i.e., they have a mechanism to object to and stop what you are doing". This contrasts with aid programs delivered by hierarchical organisations where locals cannot interact with decision makers, and so cannot effectively oppose programs they do not want, e.g. the deworming incident where parents were angry.

"If I accepted every claim in his piece, I’d come away with the belief that some EA charities are bad in a bunch of random ways, but believe nothing that imperils my core belief in the goodness of the effective altruism movement or, indeed, in the charities that Wenar critiques."

I agree - but I think Wenar does a very good job of pointing out specific weaknesses. If he had instead framed this piece as "how EA should improve" (which is how I mentally steelman every EA hit-piece that I read), it would be an excellent piece. Under his current framing of "EA bad", I think it is a very unsuccessful piece.

I think these are his strongest and most perceptive criticisms:

  1. Global health and development EA does not adequately account for the side-effects, unintended consequences, and perverse incentives caused by different interventions in its expected-value calculations, and does not adequately advertise these risks to potential donors. Weirdly, I don't think I've come across this criticism of EA before, despite it seeming very obvious. I think this might be because people are polarised between "aid bad" and "aid good", leaving very few people saying "aid is good overall, but you should be transparent about the downsides of the interventions you are supporting".
  2. The use of quantitative impact estimates by EAs can mislead audiences into overestimating the quality of quantitative empirical evidence supporting these estimates.
  3. Expected-value calculations rooted in probabilities derived from belief (as opposed to probabilities derived from empirical evidence) are prone to motivated reasoning and self-serving biases.  

I've previously discussed weaknesses of expected-value calculations on the forum and have suggested some actionable tools to improve them.

I think GiveWell should definitely clarify what they think the most likely negative side-effects and risks of the programs they recommend are, and how severe they expect those side-effects to be.

This is great, thank you for doing this hard work!

A couple of disagreements:

"I think it’s important for many to realise the importance of other players and funding sources in the landscape. This could mean many more funding opportunities EAs are systematically neglecting." 

My view is that having many players and funding sources means that fewer important funding opportunities will be missed.

"I was struck by how little philanthropy has been directed towards tech development for biosecurity, mitigating GCBRs, and policy advocacy for a range of topics from regulating dual-use research of concern (DURC) to mitigating risks from bioweapons."

I 100% agree regarding policy advocacy, but I disagree regarding tech development and mitigating GCBRs, for reasons you mention yourself: many different interventions, including vaccine R&D and broad public health systems strengthening in LMICs, contribute to mitigating GCBRs.

My sense is that there is a lot of impact to be made from just convincing US foundations to donate to charities abroad, which is probably more tractable than selling EA as an entire concept, and is still very compatible with TBP.

(In my opinion they are basically correct about TBP and EA being incompatible!)

Interesting post!

I'm a big fan of both progress studies and effective altruism / international development.

I think we may disagree on the size of the trade-offs when it comes to drawing philanthropic funding to these areas. There is heavy overlap between the intellectual circles of progress studies and effective altruism, so most of the investment going into one approach trades off directly against investment in the other.

I also think how progress studies aims to achieve American economic growth is very important. Some approaches to growth, e.g. re-industrialisation in the West, are more likely to trade off against growth in LMICs. Other approaches, like increasing innovation, liberalising immigration, and deregulating housing, are less likely to do this (and the first two have obvious and direct spillover benefits to LMICs).

There's also the moral question around equality. If you value the distribution of utility (e.g. if you are a prioritarian or hold a similar view), you may think the international development approach is more desirable because it may be more effective at reducing inequality.

It's also worth thinking about political considerations - I think there is a non-trivial risk that a Trump government turns away from internationalism, in which case the potential spillover benefits of American growth via larger aid budgets become much smaller. I don't think there are similar obvious risks with the international development approach. This may be a case for ensuring that the progress studies movement works to maintain cross-partisan consensus on foreign aid, alongside its cross-partisan work on science policy, immigration, etc.

(Btw, I love how short, clear and concise your post is!)

Why in the policy world, given the current size of the movement, EAs should narrowly focus on foreign policy and science policy 

Just fund community health workers - the case for why EA underestimates the cost-effectiveness

A vision for wild animal welfare - lab-grown meat, population control via contraceptives - what successful wild animal welfare interventions could look like, hundreds of years from now

Urgency in global health - In defence of short-term, band-aid fixes

Against deference in EA, and problems with interpreting consensus in fields where deference is common
