The Economist, last week: "[EA] doesn’t seriously question whether attempting to quantify an experience might miss something essential about its nature. How many WALYs for a broken heart?" (Nov 2022, Source)
HLI, this week: "...Hence, an overall effect of grief per death prevented is (0.72 x 5 x 0.5) x 4.03 = 7.26 WELLBYs"
Great article – well done!!!
Choosing cause areas

In general, CE looks at cause areas where we can have a significant, legible and measurable impact. Traditionally, this has meant focusing on cause areas that are commonly considered near-termist within EA, such as animal welfare, global health and wellbeing, family planning, and mental health. However, we think there are cause areas outside this remit – potentially within the traditionally long-termist space – where we could find interventions that may have a significant impact and where there are concrete feedback loops. In fact, this is what prompted our research team to look into health security as a cause area.
During our intervention prioritisation research

Within health security, there are probably some differences in the way we have defined and operationalised this as a cause area and prioritised interventions, compared to others in the community:
Credit note: This answer was initially drafted by former CE staff member Akhil Bansal.
Thank you Joel. Makes sense. Well done on finding these issues!
I like this post.
One key takeaway for me was to have more confidence in GiveWell's analyses of its current top recommended charities. The suggested changes here mostly move numbers by 10%–30%, which is significant but not huge. I do CEAs for my job, and this seems pretty good to me. After reading this, I feel like GiveWell's cost-effectiveness analyses are OK. Not perfect, but as a rough decision heuristic they probably work fine. CEAs are ultimately rough approximations at best, and this is how they should be – and are – used by GiveWell.
My suggestion to GiveWell on how they can improve, after reading this post, would be: maybe it is more valuable for GiveWell to spend their limited person-hours doing rough assessments of more varied and different interventions than perfecting their analyses of these top charities. I would be more excited to see GiveWell commit to developing very rough, speculative, back-of-the-envelope-style analyses of a broader range of interventions (mental health, health policy, economic growth, systemic change, etc.) than to keep improving their current analyses to perfection. (Although maybe more of both is the best plan, if it is achievable.)
I think this is a sentiment that Michael Plant (one of the post authors) has expressed in the past; see his comment here. I would be curious to hear whether the post authors, after doing this analysis, have thoughts on the value of GiveWell investing more in exploration versus more in accuracy.
If with some research you have a good chance of identifying better donation opportunities than "give to GiveWell or EA Funds", I'd be excited for you to do that and write up your results.
Interestingly, I recently tried this (here). It led to some money moved, but less than I hoped. I would encourage others to do the same, but also to have low expectations that anyone will listen to them, care, or donate differently – I don't expect the community is that agile/responsive to suggestions. In fact, the whole experience made me much more likely to enter a donor lottery – I now have a long list of places I expect should be funded and nowhere near enough funds to give, so I might as well enter the lottery and see if that helps solve the problem.
I don’t follow US pandemic policy closely, but wasn’t some $bn (albeit much less than $30bn) still approved for pandemic preparedness, and isn’t more still being discussed? (A very quick Google points to $0.5b here and $2b here, etc., and I expect there is more.) If so, that seems like a really significant win.

Also, your reply was about government, not about EA or adjacent organisations. I am not sure anyone in this post/thread has given any evidence of a "valiant effort" yet, such as listing campaigns run or even policy papers written. The only post-COVID policy work I know of (in the UK, see comment below) seemed very successful, and I am not sure it makes sense to update against "making the government sane" without understanding what the unsuccessful campaigns have been. (Maybe also Guarding Against Pandemics – are they doing stuff that people feel ought to have had an impact by now, and has it?)
I just wanted to share as my experience was so radically different from yours. Based in the UK during the pandemic I felt like:
I came away from the situation feeling that influencing policy was easy, impactful and neglected, and hopeful about what policy work could achieve – but disappointed that more was not being done to "make the government sane around biorisk".
This leads me to the question: why are our experiences so different? Some hypotheses I have are:
Grateful for your views.
I don't know that she'd call herself an effective altruist, but if you just want someone to talk about doing effective development spending then I'm not sure that it matters...
Note to anyone still following this: I have now written up a long list of longtermist policy projects that should be funded, this gives some idea how big the space of opportunities is here: List of donation opportunities (focus: non-US longtermist policy work)
Or Esther Duflo – obviously great, and with sensible views on development.