Our friends estimate the cost at about $258 billion dollars to end extreme poverty for a year, and point out that this is a small portion of yearly philanthropic spending or [...]
Is that true?
Just ballparking this based on fractions of GDP given to charitable organisations (big overestimate imo), I get global giving at ~$500bn/year. So I don't believe this is true.
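For concreteness, here's a minimal sketch of that sort of back-of-envelope calculation; the world-GDP figure and giving rate are illustrative assumptions of mine, not the commenter's actual inputs:

```python
# Rough back-of-envelope sketch (illustrative assumptions, not the actual inputs used above).
global_gdp = 100e12          # assumed world GDP, roughly $100tn
giving_share = 0.005         # assumed share of GDP given to charitable organisations (likely generous)
global_giving = global_gdp * giving_share

cost_to_end_poverty = 258e9  # the figure quoted above

print(f"estimated global giving ≈ ${global_giving / 1e9:.0f}bn/year")
print(f"share required          ≈ {cost_to_end_poverty / global_giving:.0%}")
# -> ~$500bn/year of giving, of which $258bn is roughly half -- hardly "a small portion"
```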
Now this is not… great, and certainly quite different from the data by tenthkrige. I'm pretty sure this isn't a bug in my implementation or due to the switch from odds to log-odds, but a deeper problem with the method of rounding for perturbation.
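Purely for illustration, here's a minimal sketch of what perturbing a prediction in log-odds space looks like, and of how rounding the perturbed values can collapse them back together; this is my guess at the kind of step being discussed, not tenthkrige's or the author's actual code:

```python
import numpy as np

def perturb_log_odds(p, shift):
    """Shift a probability by `shift` in log-odds space and map it back to a probability."""
    log_odds = np.log(p / (1 - p))
    return 1.0 / (1.0 + np.exp(-(log_odds + shift)))

p = 0.70
for shift in (-0.2, -0.1, 0.1, 0.2):
    q = perturb_log_odds(p, shift)
    # Rounding perturbed probabilities back to coarse buckets can collapse distinct
    # perturbations onto the same value -- the kind of artefact a rounding step can introduce.
    print(f"shift {shift:+.1f}: perturbed {q:.3f} -> rounded {round(q, 1)}")
```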
It's not particularly my place to discuss this, but when I replicated his plots I also got very different results; since then he has shared his code with me and I discovered a bug in it.
Less concave = more risk tolerant, no?
Argh, yes. I meant more concave.
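To make the concavity/risk-tolerance relationship concrete, here's a small sketch comparing certainty equivalents of a 50/50 gamble under CRRA utility at different curvatures; the payoffs and curvature parameters are arbitrary illustrative choices:

```python
import numpy as np

def crra(x, gamma):
    """CRRA utility; larger gamma means a more concave (more curved) function."""
    return np.log(x) if gamma == 1 else (x ** (1 - gamma) - 1) / (1 - gamma)

def certainty_equivalent(outcomes, probs, gamma):
    """Sure amount whose utility equals the gamble's expected utility."""
    eu = sum(p * crra(x, gamma) for p, x in zip(probs, outcomes))
    return np.exp(eu) if gamma == 1 else (eu * (1 - gamma) + 1) ** (1 / (1 - gamma))

outcomes, probs = [50, 150], [0.5, 0.5]   # a 50/50 gamble with expected value 100
for gamma in (0.5, 1.0, 2.0, 4.0):
    print(f"gamma = {gamma}: certainty equivalent = {certainty_equivalent(outcomes, probs, gamma):.1f}")
# The certainty equivalent falls as gamma rises: the more concave the utility
# function, the less the gamble is worth relative to its expected value of 100.
```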
The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don't know if it's more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?
No, it doesn't make sense. "We don't know the curvature, ergo it could be anything" is not convincing. What you seem to think is "concrete" seems entirely arbitrary to me.
As Michael Dickens notes, and as I say in the introduction, I think the post argues on balance against adopting as much financial risk tolerance as existing EA discourse tends to recommend.
I appreciate that you think that, and I agree that Michael has said he agrees, but I don't understand why either of you thinks that. I went point-by-point through your conclusion and it seems clear to me that the balance is on more risk-taking. I don't see any way to convince me other than putting the arguments you put forward into each bucket, weighting them, and adding them up. Then we can see whether the point of disagreement is in the weights or the arguments.
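Something like the following is the shape of the exercise I have in mind; the labels and weights are entirely hypothetical placeholders, not the actual arguments from the post:

```python
# Entirely hypothetical buckets and weights, just to illustrate the
# "put each argument in a bucket, weight it, add it up" proposal.
considerations = {
    "argument 1": ("more risk", 2),
    "argument 2": ("less risk", 1),
    "argument 3": ("more risk", 1),
}
score = sum(w if lean == "more risk" else -w for lean, w in considerations.values())
print("net lean:", "more risk-taking" if score > 0 else "less risk-taking")
```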
Beyond an intuition-based re-weighting of the considerations,
If you think my weightings and comments on your conclusions relied a little too much on intuition, I'll happily spell out those arguments more fully. Let me know which ones you disagree with and I'll go into more detail.
But to my mind, the way this flattening could work is explained in the “Arguments from uncertainty” section:
I think we might be talking at cross purposes here. By flattening here, I meant "less concave" - hence more risk tolerant. I think we agree on this point?
Could you point me to what you're referring to, when you say you note this above?
Ah - this is the problem with editing your posts. It's actually the very last point I make. (And I also made that point at much greater length in an earlier draft. Essentially, marginal utility declines more slowly for a philanthropist than for an individual, because you can always give to an additional marginal recipient; a numerical sketch of this follows below. I agree that you can do more funky things in other EA areas, but I don't find any of the arguments convincing.) For example:
To my mind, one way that a within-cause philanthropic utility function could exhibit arbitrarily more curvature than a typical individual utility function is detailed in Appendix B.
I just thought this was a totally unrealistic model in multiple dimensions, and don't really think it's relevant to anything? I didn't see it as being any different from me just saying "Imagine a philanthropist with an arbitrary utility function which is more curved than an individual's".
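A minimal numerical sketch of the "you can always give to a marginal individual" point above; the baseline wealth, pool size, and log utility are all illustrative assumptions, not a claim about the actual shape of any cause area's returns:

```python
import numpy as np

w0 = 1_000.0      # assumed baseline consumption of each recipient
N = 1_000_000     # assumed pool of roughly-equally-needy recipients

def individual_gain(budget):
    """Utility gain if a single log-utility individual keeps the whole budget."""
    return np.log(w0 + budget) - np.log(w0)

def philanthropic_gain(budget):
    """Utility gain if the budget is split evenly across N log-utility recipients."""
    return N * (np.log(w0 + budget / N) - np.log(w0))

for budget in (1e6, 1e7, 1e8):
    print(f"budget ${budget:>12,.0f}: individual gain {individual_gain(budget):6.2f}, "
          f"philanthropic gain {philanthropic_gain(budget):10.2f}")
# The individual's gain flattens out quickly (strong concavity), while the philanthropic
# gain stays close to linear in the budget, because each extra dollar can go to
# another marginal recipient.
```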
I have written a bit about this (and related topics) in the past:
I think you make a fairly good argument (in iv) about trying to maximise the probability of achieving outcome x, where x could turn out to be a small number, but I expect futarchy proponents would argue that you can fix this by returning E[outcome] rather than P(outcome > x). So society would vote for the policy that maximises the expected outcome rather than the probability of an outcome. (Or you could look at P(outcome > x) for a range of x.)
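As a toy illustration of the P(outcome > x) vs E[outcome] distinction, here are two hypothetical policies with made-up outcome distributions, chosen only to show how the two decision rules can disagree:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.0  # the threshold the market is asked about

# Two hypothetical policies, described by samples of their outcomes:
policy_a = np.full(100_000, 1.1)                                 # guarantees slightly more than x
policy_b = rng.choice([0.0, 10.0], size=100_000, p=[0.4, 0.6])   # risky, but much better on average

for name, outcomes in [("A", policy_a), ("B", policy_b)]:
    print(f"policy {name}: P(outcome > x) = {np.mean(outcomes > x):.2f}, "
          f"E[outcome] = {outcomes.mean():.2f}")
# A "maximise P(outcome > x)" rule picks A (1.00 vs ~0.60), while a
# "maximise E[outcome]" rule picks B (~6.00 vs 1.10).
```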
You wrote on reddit:
But I think none of your explanation here actually relies on this correlation. (And I think this is extremely important.) I think risk-neutrality arguments are actually not the right framing. For example, a coin flip is a risky bet, but that doesn't mean the price will be less than 1/2, because there's a symmetry in whether you are bidding on heads or tails. It's just more likely that you don't bet at all, because if you are risk-averse you value H at 0.45 and T at 0.45.
The key difference is that if the coin flip is correlated to the real economy, such that the dollar-weighted average person would rather live in a world where heads come up than tails, they will pay more for tails than heads.
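A minimal sketch of that last point, using a log-utility trader and made-up wealth levels; the "price" shown is the marginal-utility-weighted probability at which the trader is indifferent, which is one standard way to formalise "pays more for the claim that pays off in the bad state":

```python
# States of the world, with assumed probabilities and the trader's wealth in each.
states = {
    "heads": {"prob": 0.5, "wealth": 120.0},  # assumed: heads = good-economy state
    "tails": {"prob": 0.5, "wealth": 80.0},   # assumed: tails = bad-economy state
}

def marginal_utility(w):
    return 1.0 / w  # log utility, so u'(w) = 1/w

# Marginal-utility-weighted ("risk-neutral") probabilities: the price at which a
# risk-averse trader is indifferent to a $1 claim paying off in that state.
raw = {s: v["prob"] * marginal_utility(v["wealth"]) for s, v in states.items()}
total = sum(raw.values())
for s in states:
    print(f"price of $1-if-{s}: {raw[s] / total:.3f}")
# Both states are equally likely, yet the tails claim prices at 0.6 and the heads
# claim at 0.4, because a dollar is worth more in the poorer (tails) world.
```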