Executive Director at One for the World; board member at High Impact Athletes.
Does the relative amount of evidence and uncertainty affect your thinking at all? I have heard indirectly of people working in longtermism who donate to neartermist causes because they think it hedges the very large uncertainties of longtermism (both longtermist work and donations). As you say, the neartermist donation options recommended by EA benefit from very robust evidence, observable feedback loops, tried-and-tested organisations etc., and that could be a good hedge if you're working in an area of much higher uncertainty.
Very interesting, thanks. I read this as more saying 'we need to be prepared to back unlikely but potentially impactful things', and acknowledging the uncertainty in longtermism, rather than saying 'we don't think expected value is a good heuristic for giving out grants', but I'm not confident in that reading. Probably reflects my personal framing more than anything else.
Like you, I'm fairly relaxed about asking people publicly to be transparent. Specifically in this context, though, someone from FTX said they would be open to doing this if the idea was popular, which prompted the post.

As a side note, I also think that MEL consultancies are adept at understanding context quickly and would be a good option (or something that EA could found itself - see Rossa's comment). My wife is an MEL consultant, which informs my view of this. But that's not to say they are necessarily the best option.
Absolutely. And so the questions are:
1. Have we defined that ROI threshold?
2. What is it?
3. Are we building ways to learn by doing into these programmes?
The discussions on this post suggest that it's at least plausible that the answers are 'no', 'anything that seems plausibly good' and 'no', which I think would be concerning for most people, irrespective of where you sit on the various debates/continuums within EA.
I like this.
I'm not sure I agree that I find it equally as worrying as moving so fast that we break too many things, but it's a good point to raise. On a practical level, I partly wrote this because FTX is likely to have a lull after their first grant round, during which they could invest in transparency.
I also think a concern is what seems to be such an enormous double standard. The argument above could easily be used to justify spending aggressively in global health or animal welfare (where, notably, we have already done a serious, serious amount of research and found amazing donation options; and, as you point out, the need is acute and immediate). Instead, it seems like it might be 'don't spend money on anything below 5x GiveDirectly' in one area, and the spaghetti-wall approach in another.
Out of interest, did you read the post as emotional? I was aiming for brevity and directness but didn't/don't feel emotional about it. Kind of the opposite, actually - I feel like this could help to make us more factually aligned and less driven by emotional reactions to things that might seem like 'boondoggles'.
Thanks - I missed that update, and wouldn't have written about CEA above if I had seen it, I think.
Indeed :-) I had understood from this post (https://forum.effectivealtruism.org/posts/FjDpyJNnzK8teSu4J/) that this was the destination, though, so the current rate of spending would be less relevant than having good heuristics before we get to that scale.
I see from Max below, though, that Open Phil is assuming a lot of this spending, so sorry for throwing a grenade at CEA if you're not actually going to be behind a really 'move-the-needle' amount of campus spending.
Indeed - and to be clear, I wasn't trying to suggest that you shouldn't have made the comment - just that it's very secondary to the substance of the post, and so I was hoping the meat of the discussion would provoke the most engagement.
This would be great. It also closely aligns with what EA expects before and after giving large funding in most cause areas.
I'm not sure why the burden wouldn't fall on people making the distribution of funds? (Incidentally, I'm using this to mean that the funders could also hire external consultancies etc. to produce this.)
But, more to the point, I wrote this really hoping that both organisations would say "sure, here it is" and we could go from there. That might really have helped bring people together. (NB: I realise FTX haven't engaged with this yet.)
In many ways, if the outcome is that there isn't a clear/shared/approved expected value rationale being used internally to guide a given set of spending, that seems to validate some of the concerns that were expressed at EAG.