Great to hear about finding such a good fit, thanks for sharing!
Hi Dustin :)
FWIW I also don't particularly understand the normative appeal of democratizing funding within the EA community. It seems to me like the common normative basis for democracy would tend to argue for democratizing control of resources in a much broader way, rather than within the self-selected EA community. I think epistemic/efficiency arguments for empowering more decision-makers within EA are generally more persuasive, but wouldn't necessarily look like "democracy" per se and might look more like more regranting, forecasting tournaments, etc.
Just wanted to say that I thought this post was very interesting and I was grateful to read it.
Just wanted to comment to say I thought this was very well done, nice work! I agree with Charles that replication work like this seems valuable and under-supplied.
I enjoyed the book and recommend it to others!
In case of interest to EA Forum folks, I wrote a long tweet thread with more substance on what I learned from it and remaining questions I have here: https://twitter.com/albrgr/status/1559570635390562305
Thanks MHR. I agree that one shouldn't need to insist on statistical significance, but if GiveWell thinks that the actual expected effect is ~12% of the MK result, then I think if you're updating on a similarly-to-MK-powered trial, you're almost to the point of updating on a coinflip because of how underpowered you are to detect the expected effect.
I agree it would be useful to do this in a more formal Bayesian framework that accurately characterizes GiveWell's priors. It wouldn't surprise me if one of the conclusions were that I'm misinterpreting GiveWell's current views, or that it's hard to articulate a formal prior that gets you from the MK results to GiveWell's current views.
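To make the power intuition concrete, here's a rough back-of-the-envelope sketch (my numbers, not GiveWell's): a trial powered at 80% for the original MK effect size has only single-digit power against a true effect ~12% as large, so a significant/non-significant readout from such a trial carries very little information.

```python
from scipy.stats import norm

def power_two_sided(effect_over_se, alpha=0.05):
    """Power of a two-sided z-test, with the effect expressed in SE units."""
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(effect_over_se - z) + norm.cdf(-effect_over_se - z)

# A trial powered at 80% (two-sided alpha = 0.05) implies the hypothesized
# effect is about 2.80 standard errors.
full_effect_se = norm.ppf(0.975) + norm.ppf(0.80)

# If the true effect is only ~12% of that, power collapses to near alpha:
power = power_two_sided(0.12 * full_effect_se)
print(f"power to detect the smaller effect: {power:.1%}")  # roughly 6%
```

In other words, under these stylized assumptions the rejection probability is barely above the 5% you'd get under the null, which is the sense in which the update is close to a coinflip.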
Thanks, appreciate it! FWIW, I sympathize with "I have an intuition that low VSLs are a problem and we shouldn't respect them" for some definition of "low", but I think it's just a question of what the relevant "low" is.
Thanks Karthik. I think we might be talking past each other a bit, but replying in order on your first four replies:
Hey Karthik, starting a separate thread for a different issue. I opened your main spreadsheet for the first time, and I'm not positive, but I think the 90% reduction claim is due to a spreadsheet error? The utility gain in B5 that flows through to your bottom-line takeaway is hardcoded as being in log terms, but if eta changes then the conversion of the utility gain into $s at the global average should change too (and, by the way, I think it would really matter whether you were denominating in units of the global average, the global median, or the global poverty level). In this copy I made a change to reimplement isoelastic utility in B7 and B8. In this version, when eta=1.00001, OP ROI is 169, and when eta=1.5, OP ROI is 130, for a difference of ~25% rather than 90%. I didn't really follow what was happening in the rest of the sheet, so it's possible this is wrong or misguided or implemented incorrectly.
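To illustrate the B5 point with a minimal sketch (the consumption numbers here are illustrative, not taken from the spreadsheet): under isoelastic (CRRA) utility, the utility gain from a given consumption change depends on eta, so a cell hardcoded with the log-utility (eta=1) value will misstate the gain whenever eta differs from 1.

```python
from math import log

def isoelastic_utility_gain(c0, c1, eta):
    """Utility gain from moving consumption c0 -> c1 under isoelastic (CRRA)
    utility; eta = 1 is the log-utility special case."""
    if abs(eta - 1.0) < 1e-9:
        return log(c1) - log(c0)
    return (c1 ** (1 - eta) - c0 ** (1 - eta)) / (1 - eta)

# The gain from a hypothetical doubling of consumption is not invariant to eta:
print(isoelastic_utility_gain(1.0, 2.0, 1.0))  # ~0.693 (the log-terms value)
print(isoelastic_utility_gain(1.0, 2.0, 1.5))  # ~0.586
```

The same dependence on eta applies to the marginal utility used to convert utils back into $s at any reference consumption level, which is why the choice of denominating unit matters as well.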
Thanks for the thoughtful post, I really appreciate it!
Open Phil has thought some about arguments for a higher eta but, as far as I can find, never written them up, so I'll go through some of the arguments that seem most relevant to me:
On your 36% adjustment within the log framework: I don't think our estimates for this are accurate to anything like 36%; I'd be happy if they turn out to be within a factor of 2-3x. So I find it easy to believe you could be right here. But I think your changes come from a period when inequality increased substantially, to a historically unusual level, and I would be surprised if it made sense to predict a continuation of that increasing trend indefinitely over the relevant horizon for Tom's model (many decades to centuries).
More broadly, I agree that the gains from redistribution can be substantial and I think our work reflects that (e.g., our Global Aid Policy program).