

Sorry, I edited while you posted. I see US giving as 1.44% * $27tn = ~$400bn, which is the vast majority of global charitable giving once I add in the rest of the countries Wikipedia lists and interpolate based on location for the other biggish economies.

Our friends estimate the cost at about $258 billion to end extreme poverty for a year, and point out that this is a small portion of yearly philanthropic spending or [...]

Is that true?

Just ballparking this based on fractions of GDP given to charitable organisations (a big overestimate imo), I get global giving at ~$500bn/year. So I don't believe this is true.
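To make the ballpark reproducible, here is the arithmetic as a sketch. The US figures are from the comment above; the rest-of-world giving rate is an illustrative assumption of mine, not a figure from the thread or Wikipedia:

```python
# Ballpark of global charitable giving from fractions of GDP.
# US figures are from the comment above; the rest-of-world rate is
# a hypothetical assumption chosen only to illustrate the method.
us_gdp_tn = 27.0          # US GDP, $tn
us_giving_rate = 0.0144   # US giving as a fraction of GDP (the Wikipedia figure cited)

row_gdp_tn = 73.0         # rest-of-world GDP, $tn (assumed, ~$100tn world total)
row_giving_rate = 0.0015  # assumed average giving rate elsewhere (illustrative)

us_giving_bn = us_gdp_tn * us_giving_rate * 1000       # ~$389bn
row_giving_bn = row_gdp_tn * row_giving_rate * 1000    # ~$110bn
total_bn = us_giving_bn + row_giving_bn                # ~$500bn/year
```

On these assumed numbers the US is indeed the vast majority of the total, and the $258bn figure is roughly half of global giving rather than a small portion of it.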

Now this is not… great, and certainly quite different from the data by tenthkrige. I'm pretty sure this isn't a bug in my implementation or due to the switch from odds to log-odds, but a deeper problem with the method of rounding for perturbation.

It's not particularly my place to discuss this, but when I replicated his plots I also got very different results; since then he has shared his code with me and I discovered a bug in it.

Basically it simulates the possible outcomes of all the other bets you have open.

How can I do that without knowing my probabilities for all the other bets? (Or have I missed something on how it works?)
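For what it's worth, a minimal sketch of what such a simulation might look like (my own reconstruction, not the actual code) makes the question concrete: the simulation needs your probability for every open bet, which is exactly the information at issue.

```python
import random

# Hypothetical open bets: (your probability of winning, net odds, stake).
# These numbers are illustrative only.
open_bets = [(0.60, 1.0, 10.0), (0.30, 3.0, 5.0), (0.55, 0.8, 8.0)]

def simulate_outcomes(bankroll, bets, n_sims=20_000, seed=0):
    """Monte Carlo over the joint outcomes of all open bets.

    Each bet resolves independently: a win pays stake * odds, a loss
    costs the stake. Note that it requires a probability for every
    open bet -- the information being asked about above.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_sims):
        wealth = bankroll
        for p, odds, stake in bets:
            if rng.random() < p:
                wealth += stake * odds
            else:
                wealth -= stake
        results.append(wealth)
    return results

outcomes = simulate_outcomes(100.0, open_bets)
mean_wealth = sum(outcomes) / len(outcomes)
```

The resulting distribution of bankrolls is what you would then use to size a new bet; without probabilities for the other bets, there is nothing to simulate.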

Less concave = more risk tolerant, no?

Argh, yes. I meant more concave.

The point of this section is that since there are no good public estimates of the curvature of the philanthropic utility function for many top EA cause areas, like x-risk reduction, we don't know if it's more or less concave than a typical individual utility function. Appendix B just illustrates a bit more concretely how it could go either way. Does that make sense?
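As a reader aid (my own illustration, not from the post or Appendix B): with CRRA utility, "more concave" translates directly into a lower certainty equivalent for the same gamble, i.e. less risk tolerance. The gamble and curvature parameters below are arbitrary.

```python
import math

def crra_utility(w, gamma):
    """CRRA utility: u(w) = w**(1 - gamma) / (1 - gamma), log for gamma == 1."""
    if gamma == 1:
        return math.log(w)
    return w ** (1 - gamma) / (1 - gamma)

def certainty_equivalent(outcomes, probs, gamma):
    """Wealth level whose utility equals the gamble's expected utility."""
    eu = sum(p * crra_utility(w, gamma) for w, p in zip(outcomes, probs))
    if gamma == 1:
        return math.exp(eu)
    return ((1 - gamma) * eu) ** (1 / (1 - gamma))

gamble = [50.0, 150.0]   # 50/50 gamble with expected value 100
probs = [0.5, 0.5]
ce_mild = certainty_equivalent(gamble, probs, gamma=1)   # less concave: CE ~86.6
ce_sharp = certainty_equivalent(gamble, probs, gamma=5)  # more concave: CE ~59.3
```

The more concave function demands a much larger premium to accept the same gamble, which is why the unknown curvature of the philanthropic utility function matters for the risk-tolerance conclusion.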

No, it doesn't make sense. "We don't know the curvature, ergo it could be anything" is not convincing. What you seem to think is "concrete" seems entirely arbitrary to me.

As Michael Dickens notes, and as I say in the introduction, I think the post argues on balance against adopting as much financial risk tolerance as existing EA discourse tends to recommend.

I appreciate you think that, and I agree that Michael has said he agrees, but I don't understand why either of you think that. I went point-by-point through your conclusion and it seems clear to me the balance is in favour of more risk taking. I don't see a way to resolve this other than putting the arguments you put forward into each bucket, weighting them, and adding them up. Then we can see whether the point of disagreement is in the weights or in the arguments.

Beyond an intuition-based re-weighting of the considerations,

If you think my weightings and comments about your conclusions relied a little too much on intuition, I'll happily spell out those arguments in more detail. Let me know which ones you disagree with and I'll go into more detail.

But to my mind, the way this flattening could work is explained in the “Arguments from uncertainty” section:

I think we might be talking at cross purposes here. By flattening, I meant "less concave", hence more risk tolerant. I think we agree on this point?

Could you point me to what you're referring to, when you say you note this above?

Ah - this is the problem with editing your posts: it's actually the very last point I make. (I also made that point at much greater length in an earlier draft.) Essentially, the utility function for any philanthropy is less downward sloping than for an individual, because you can always give to a marginal individual. I agree that you can do funkier things in other EA areas, but I don't find any of the arguments convincing. For example:

To my mind, one way that a within-cause philanthropic utility function could exhibit arbitrarily more curvature than a typical individual utility function is detailed in Appendix B.

I just thought this was a totally unrealistic model in multiple dimensions, and don't really think it's relevant to anything. I didn't see it as being any different from me just saying "Imagine a philanthropist with an arbitrary utility function which is more curved than an individual's".

but these arguments are not as strong as people claim, so we shouldn't say EAs should have high risk tolerance

I don't get the same impression from reading the post, especially in light of the conclusions, which even without my adjustments seem in favour of taking more risk.

Yes, I thought your comment was great!

Can you elaborate on why you believe this? Are you talking specifically about global poverty interventions, or (EA-relevant) philanthropy in general? (I can see the case for global poverty[1], I'm not so sure about other causes.)

I was mostly thinking global poverty and health, yes. I think it's still probably true for other EA-relevant philanthropy, but I don't think I can claim that here.

I'm also not clear on why you believe this, can you explain? (FWIW the claim in the parenthetical is probably false: on a cross-section across countries, GDP growth and equity return are negatively correlated, see Ritter (2012), "Is Economic Growth Good for Investors?")

1/ I don't think that study is particularly relevant? It's making a statement about the correlation between countries' growth rates and the returns on their stock markets.

2/ I don't think there's really a study which is going to convince me either way on this. My reasoning is more about my model of the world:

a/ Economic actors will demand a higher risk premium (ie stocks down) when they expect the future economy to be less bright (ie weak economy => lower earnings)

b/ Economic actors will demand a higher risk premium when their confidence is lower

I don't think there's likely to be a simple way to extract a historical correlation, because it's not clear how forward-looking you want the estimate to be, what time horizon is relevant, etc. I think if you believe stocks are negatively correlated with NGDP AND you believe they carry a risk premium, you want to be loading up on them to absolutely unreasonable levels of leverage.
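To illustrate that last claim with a stylized Merton/Kelly-style calculation (all parameter values are assumptions of mine, not figures from the thread): a positive risk premium plus negative stock/NGDP correlation means stocks both earn a premium and pay off when NGDP-linked spending needs are high, so the implied allocation gets pushed well past 100%.

```python
# Stylized one-asset Merton/Kelly allocation plus a liability hedge.
# All parameter values are illustrative assumptions.
mu = 0.05          # assumed equity risk premium over cash
sigma = 0.18       # assumed equity return volatility
gamma = 1.0        # log utility (the Kelly case)

# If stocks are negatively correlated with NGDP, and your spending
# needs rise when NGDP falls, then stocks are positively correlated
# with your needs: they pay off exactly when you need the money.
corr_needs = 0.5   # assumed correlation of stock returns with spending needs
sigma_needs = 0.10 # assumed volatility of spending needs

speculative = mu / (gamma * sigma ** 2)     # ~1.54x from the premium alone
hedging = corr_needs * sigma_needs / sigma  # ~0.28x extra to hedge the needs
leverage = speculative + hedging            # ~1.8x: leveraged long
```

Under these (arbitrary) numbers, the hedging demand adds to rather than offsets the speculative demand, which is the direction of the argument above; the exact level depends entirely on the assumed parameters.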

Just in the first few lines, we have a nitpick about the grammar of the title

I actually think this is substantially more than a nitpick. I doubt people are reading the whole of a 61(!) minute article and spotting that the article doesn't support its title.

I'll grant the second point: I found critiquing this article extremely difficult and frustrating due to its structure. I think the EA Forum would be much better if people wrote shorter articles, and it disappoints me that people seem to upvote without reading.
