weeatquince

4330 karma · Joined Sep 2014

Comments (386)
The Economist, last week: "[EA] doesn’t seriously question whether attempting to quantify an experience might miss something essential about its nature. How many WALYs for a broken heart?" (Nov 2022, Source)

HLI, this week: "...Hence, an overall effect of grief per death prevented is (0.72 x 5 x 0.5) x 4.03 = 7.26 WELLBYs" 
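As a quick sanity check on the quoted arithmetic (the individual factors and their interpretation are HLI's, not mine – I am only multiplying the numbers as quoted):

$$
(0.72 \times 5 \times 0.5) \times 4.03 = 1.8 \times 4.03 \approx 7.25 \ \text{WELLBYs},
$$

which matches the quoted 7.26 WELLBYs up to rounding of the inputs.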

 

Great article – well done!!!

Choosing cause areas

In general, CE looks at cause areas where we can have a significant, legible and measurable impact.

Traditionally, this has meant focusing on cause areas that within EA are commonly considered near-termist, such as animal welfare, global health and wellbeing, family planning and mental health.

However, we think there are cause areas that fall outside this remit, including some traditionally within the long-termist space, where we could find interventions with a significant impact and concrete feedback loops. In fact, this is what prompted our research team to look into health security as a cause area.

 

During our intervention prioritisation research

Within health security, there are probably some differences in the way we have defined and operationalised this as a cause area and prioritised interventions compared to others in the community:

  1. We have taken a broader focus than GCBRs (Global Catastrophic Biological Risks), and thought about things like antimicrobial resistance, zoonotic pandemics and other biothreats which are very unlikely to have GCBR potential but which may, in expectation, be quite high priority to work on.
  2. We are probably less likely to be excited by ideas tailored solely towards tail-risk GCBR threats, such as civilizational refuges, which we imagine are only useful for extinction-level risks. We are less excited about these ideas not necessarily because we think the risk of such events is low, but firstly because there are unlikely to be strong ways to measure the impact of work in this area in the short to medium term, and secondly because such work has less impact on risks below the extinction level.
  3. To operationalise our research and the cost-effectiveness estimates we have made, we looked at the impact of our interventions over a time frame of the next 50 years. We do not think this is a perfect operationalisation, but we think it is fairly useful: we are very sceptical of our ability to know what the biggest risks to the world will be beyond 50 years.
     

Credit note: This answer was initially drafted by former CE staff member Akhil Bansal.

Thank you Joel. Makes sense. Well done on finding these issues!

I like this post.

One key takeaway for me was to have more confidence in GiveWell's analysis of its current top recommended charities. The suggested changes here mostly move numbers by 10%–30%, which is significant but not huge. I do CEAs for my job and this seems pretty good to me. After reading this I feel like GiveWell's cost-effectiveness analyses are OK: not perfect, but as a rough decision heuristic they probably work fine. CEAs are ultimately rough approximations at best, and that is how they should be – and are – used by GiveWell.

My suggestion to GiveWell on how they can improve, after reading this post, would be: maybe it is more valuable for GiveWell to spend their limited person-hours doing rough assessments of a more varied set of interventions than perfecting their analyses of these top charities. I would be more excited to see GiveWell commit to developing very rough, speculative, back-of-the-envelope-style analyses of a broader range of interventions (mental health, health policy, economic growth, systemic change, etc.) than to keep improving their current analyses towards perfection. (Although maybe more of both is the best plan, if it is achievable.)

I think this is a sentiment that MichaelPlant (one of the post authors) has expressed in the past – see his comment here. I would be curious to hear whether the post authors, after doing this analysis, have thoughts on the value of GiveWell investing more in exploration versus more in accuracy.

If with some research you have a good chance of identifying better donation opportunities than "give to GiveWell or EA Funds", I'd be excited for you to do that and write up your results. 

Interestingly, I recently tried this (here). It led to some money moved, but less than I hoped. I would encourage others to do the same, but also to have low expectations that anyone will listen to them, care, or donate differently – I don’t expect the community is that agile or responsive to suggestions.

In fact, the whole experience made me much more likely to enter a donor lottery – I now have a long list of places I expect should be funded and nowhere near enough funds to give, so I might as well enter the lottery and see if that helps solve the problem.

I don’t follow US pandemic policy closely, but wasn’t some amount in the billions (albeit much less than $30bn) still approved for pandemic preparedness, and isn't more still being discussed? (A very quick google points to $0.5b here and $2b here, etc., and I expect there is more.) If so, that seems like a really significant win.

Also, your reply was about government, not about EA or adjacent organisations. I am not sure anyone in this post/thread has given any evidence of a "valiant effort" yet, such as listing campaigns run or even policy papers written. The only post-COVID policy work I know of (in the UK, see comment below) seemed very successful, and I am not sure it makes sense to update against "making the government sane" without understanding what the unsuccessful campaigns have been. (Maybe also Guarding Against Pandemics – are they doing things that people feel ought to have had an impact by now, and have they?)

I just wanted to share, as my experience was so radically different from yours. Based in the UK during the pandemic, I felt like:

  • No one in the UK was really doing anything to try to "make the government sane around biorisk". I published a paper targeted at government on managing risks, and I remember that at the time (in 2020) it felt like no one else was shifting their focus to policy change based on lessons learned from COVID.
  • When I tried doing stuff, it went super well. As mentioned here (and here), this work went much better than expected. The government seemed willing to update and to commit to being better in future.

I came away from the situation feeling that influencing policy was easy, impactful and neglected, and hopeful about what policy work could achieve – but disappointed that more was not being done to "make the government sane around biorisk".
 

This leads me to the question: why are our experiences so different? Some hypotheses I have are:

  • Luck / randomness – maybe I was lucky or US advocates were unlucky, and we should assume the truth lies somewhere in the middle.
  • Different country – the US is different, harder to influence, or less sane than some (or many) other places.
  • Different methodology – the standard policy advocacy sector really sucks: it is not evidence-based and there is little monitoring and evaluation (M&E). It might be that advocacy run in an impact-focused way (as was happening in the UK) is just much better than funding standard advocacy organisations (which I guess was happening in the US). See discussion on this here.
  • Different amount of work – your post mentions that a "valiant effort" was made, but does not provide evidence for this, which makes it hard to form an opinion on what works and why. It would be great to get an answer to this (see Susan's comment), e.g. links to a few campaigns in this space.

Grateful for your views.

I don't know that she'd call herself an effective altruist, but if you just want someone to talk about doing effective development spending then I'm not sure that it matters...

Note to anyone still following this: I have now written up a long list of longtermist policy projects that should be funded, which gives some idea of how big the space of opportunities is: List of donation opportunities (focus: non-US longtermist policy work)
