December graduation from Purdue, aiming to be a congressional staffer afterward.
Great post!
Seems like a case where marginal thinking strictly dominates the abolitionist case. I'd imagine many more people could get on board with doing more to stop child smoking addiction and to reduce consumption of the most harmful tobacco products.
I'd expect the less controversial interventions to be more tractable, and therefore more impactful, as well. Why not aim for those?
Instance of Eliezer X-Risk Communication #47
"Imagine there was a grasshopper, and then a bumblebee. And imagine the grasshopper was 120 IQ in grasshopper-normalized intelligence. Then imagine a millionaire sycophant (grown, not built) that the grasshopper trusts pushes a TEN-THOUSAND POUND Diamondoid Bacteria off of a skyscraper--AND EVERYONE DIES".
Epistemic Status: Joke
In theory, this seems important and worth considering. Another effect that might pull in the opposite direction:
As we learn more about effective causes, we are able to identify more effective solutions and issue areas.
It's not obvious which effect (or something else) will dominate. One way we might ascertain the answer is to look at the cost-effectiveness of GiveWell's top charities across time. My understanding is this hasn't moved much, but also that their definition of a "life saved" has changed over time. I'm unsure in which direction that would push things.
I don't think I have a good objection here.
1) You could object on the grounds of value drift, which should push you toward donating now, but I don't think this gets to the heart of the issue.
2) If now is the "hinge of history", maybe it is a uniquely good time to do longtermist philanthropy.
However, if we believe neartermist work is pressing enough to justify funding as well, it seems like patient philanthropy is pretty much a Pareto improvement over ordinary neartermist philanthropy.
Would any justification for neartermist philanthropy change this?
This seems quite correct! There are quite a few open questions in my mind.
1) What is the chance Anthropic EAs either aren't interested in donating or will spread their donations out over a significant period of time? If they are EAs, it seems unlikely that they will struggle to understand the "donation timing" case.
2) What percent of the IPO proceeds do we expect to be donated? To which cause areas?
3) What is our estimate of how logarithmic the utility functions of common EA charities are?
4) Isn't AI just predicting the next word? Why would Anthropic be able to make any money from this?