Cause prioritization
Identifying and comparing promising focus areas for doing good

Quick takes

48 · 2mo · 13
Often people post cost-effectiveness analyses of potential interventions, which invariably conclude that the intervention could rival GiveWell's top charities. (I'm guilty of this too!) These analyses appear with such frequency, yet I am basically never convinced that the intervention is actually competitive with GWTC. The reason is that they compare ex-ante cost-effectiveness (where you make a bunch of assumptions about costs, program delivery mechanisms, etc.) with GiveWell's calculated ex-post cost-effectiveness (where the intervention has already been delivered, so there are far fewer assumptions).

Usually, people acknowledge that ex-ante cost-effectiveness is less reliable than ex-post cost-effectiveness. But I haven't seen any acknowledgement that this systematically overestimates cost-effectiveness, because people who are motivated to pursue an intervention are going to be optimistic about unknown factors. Also, many costs are "unknown unknowns" that you might only discover after implementing the project, so leaving them out underestimates costs. (Also, the planning fallacy in general.) And I haven't seen any discussion of how large the gap between these estimates could be. I think it could be orders of magnitude, simply because costs are in the denominator of a benefit-cost ratio, so uncertainty in costs can have huge effects on cost-effectiveness.

One straightforward way to estimate this gap is to redo a GiveWell CEA, but assuming that you were setting up a charity to deliver that intervention for the first time. If GiveWell's ex-post estimate is X and your ex-ante estimate is K*X for the same intervention, then we would conclude that ex-ante estimates are K times too optimistic, and deflate them by a factor of K. I might try to do this myself, but I don't have any experience with CEAs, and would welcome someone else doing it.
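A minimal sketch of the proposed adjustment, assuming a hypothetical deflation factor; the function names and all numbers below are made up purely for illustration, not taken from any actual CEA:

```python
# Illustrative sketch of deflating ex-ante cost-effectiveness estimates
# by an empirically derived factor K. All numbers are hypothetical.

def deflation_factor(ex_ante: float, ex_post: float) -> float:
    """K = ex-ante / ex-post for the *same* intervention, e.g. from
    redoing a GiveWell CEA as if launching the charity from scratch."""
    return ex_ante / ex_post

def deflate(new_ex_ante: float, k: float) -> float:
    """Adjust a new intervention's ex-ante estimate downward by K."""
    return new_ex_ante / k

# Suppose the from-scratch ex-ante CEA says 30x cash transfers, while
# GiveWell's ex-post figure for the same intervention is 10x:
k = deflation_factor(ex_ante=30.0, ex_post=10.0)  # K = 3

# A new intervention pitched at 15x should then be read as roughly 5x:
print(deflate(15.0, k))  # 5.0
```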
35 · 4mo · 5
[edit: a day after posting, I think this perhaps reads more combative than I intended? It was meant to be more 'crisis of faith, looking for reassurance if it exists' than 'dunk on those crazy longtermists'. I'll leave the quick take as-is, but maybe clarification of my intentions might be useful to others]

Warning! Hot Take! 🔥🔥🔥 (Also v rambly and not rigorous)

A creeping thought has entered my head recently that I haven't been able to get rid of... The EA move toward AI Safety and Longtermism is often based on EV calculations that show the long-term future is overwhelmingly valuable, and thus that longtermist work is the most cost-effective intervention. However, more in-depth looks at the EV of x-risk prevention (1, 2) cast significant doubt on those EV calculations, which might make longtermist interventions much less cost-effective than the most effective "neartermist" ones.

But my doubts get worse... GiveWell estimates around $5k to save a life. So I went looking for some longtermist calculations, and I really couldn't find any robust ones![1] Can anyone point me to some robust calculations for longtermist funds/organisations where they go 'yep, under our assumptions and data, our interventions are at least competitive with top Global Health charities'? Because it seems to me like that hasn't been done.

But if we're being EA, people with high and intractable p(doom) from AI shouldn't work on AI; they should probably EtG for Global Health instead (if we're going to be full maximise-EV about everything). Like, if we're taking EA seriously, shouldn't MIRI shut down all AI operations and become a Global Health org? Wouldn't that be a strong +EV move, given their pessimistic assessments of reducing x-risk and their knowledge of +EV global health interventions?

But it gets worse... Suppose that we go, 'ok, let's take EV estimates seriously but not literally'. In which case fine, but that undermines the whole 'longtermist interventions overwhelmingly dominate EV' move […]
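To make the kind of comparison the quick take asks for concrete, here is a toy calculation. Only the ~$5k-per-life figure comes from the post; every other number and name below is an assumption I'm supplying for illustration:

```python
# Toy EV comparison against GiveWell's ~$5k-per-life benchmark.
# All probabilities and stakes below are illustrative placeholders.

GIVEWELL_COST_PER_LIFE = 5_000  # dollars, per the quick take

def expected_lives_saved(p_avert: float, lives_at_stake: float) -> float:
    """Simplest possible x-risk EV model: chance the intervention
    averts the catastrophe, times the lives that would be lost."""
    return p_avert * lives_at_stake

budget = 1_000_000  # a hypothetical $1M longtermist grant
breakeven = budget / GIVEWELL_COST_PER_LIFE  # must save 200 expected lives

# Even a 1-in-10-million chance of averting a catastrophe that kills
# 8 billion people clears the bar (800 expected lives). The question
# the quick take raises is whether such tiny probabilities are robust.
print(expected_lives_saved(1e-7, 8e9), ">=", breakeven)  # 800.0 >= 200.0
```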
48 · 7mo · 22
The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing (SWB) and perhaps even updated us toward its potential value. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism against them, the hammering they took felt at best rough and perhaps even unfair. I'm not sure exactly why I felt this way, but here are a few ideas.

* (High certainty) HLI have openly published their research and ideas, posted almost everything on the forum, and engaged deeply with criticism, which is amazing, more than perhaps any other org I have seen. This may (uncertain) have hurt them more than it has helped them.
* (High certainty) When other orgs are criticised or asked questions, they often don't reply at all, or get surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (for charity, I'm not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass? Especially when HLI's funding is less than that of many orgs that have not been scrutinised as much.
* (Low certainty) The degree of scrutiny and analysis applied to development orgs like HLI seems to exceed that applied to AI orgs, funding orgs, and community-building orgs. This scrutiny has been intense: more than one amazing statistician has picked apart their analysis. This expert-level scrutiny is fantastic, I just wish it could be applied to other orgs as well. Very few EA orgs (at least that have posted on the forum) produce full papers with publishable-level deep statistical analysis, as HLI have at least attempted to do.

Does there need to be a "scrutiny rebalancing" of sorts? I would rather other orgs got more scrutiny than development orgs getting less. Other orgs might see threads like the HLI funding thread hammering and compare it with other threads where orgs are criticised and don't engage […]
28 · 5mo
Radar speed signs currently seem like one of the more cost-effective traffic-calming measures, since they don't require roadwork, but they still surprisingly cost thousands of dollars each. Mass-producing cheaper radar speed signs seems like a tractable public health initiative.
28 · 7mo
Surprised that Animal Charity Evaluators' Recommended Charity Fund gives equal amounts to around a dozen charities: https://animalcharityevaluators.org/donation-advice/recommended-charity-fund/ Obviously uncertainty's involved, but a core tenet of EA and charity evaluators is that certain charities are more effective, so GiveWell's Top Charities Fund giving different amounts to only a few charities per year makes more sense to me: https://www.givewell.org/top-charities-fund
16 · 6mo · 8
One reason I'm excited about work on lead exposure is that it hits a sweet spot of meaningfully benefiting both humans and nonhumans. Lead has dramatic and detrimental effects on not just mammals but basically all animals, from birds to aquatic animals to insects. Are there other interventions that potentially likewise hit this sweet spot?
5 · 2mo
Since they were well received last year, I'm going to be hosting the EA London Quarterly Review Coworking sessions again for 2024. You can register here: Q1 FY24 session sign up; Q2 FY24 session sign up; Q3 FY24 session sign up; Q4 FY24 session sign up. Thanks to Rishane for making this poster and to LEAH for hosting us.
22 · 1y · 3
Put off that 80,000 Hours advises "if you find you aren’t interested in [The Precipice: Existential Risk], we probably aren’t the best people for you to get advice from". I had hoped there was more general advising beyond just for those interested in existential risk.