Cross-posted from my blog.
Contrary to my carefully crafted brand as a weak nerd, I go to a local CrossFit gym a few times a week. Every year, the gym raises funds for a scholarship so that teens from lower-income families can attend its summer camp program. I don’t know how many CrossFit-interested low-income teens there are in my small town, but I’d guess there are perhaps two who would benefit from the scholarship. After all, CrossFit is pretty niche, and the town is small.
Helping youngsters get swole in the Pacific Northwest is not exactly as cost-effective as preventing malaria in Malawi. But I notice I feel drawn to supporting the scholarship anyway. Every time it pops into my head I think, “My money could fully solve this problem.” The camp only costs a few hundred dollars per kid, and if there are just two kids who need support, I could give $500 and there would no longer be teenagers in my town who want to go to a CrossFit summer camp but can’t. Thanks to me, the hero, this problem would be entirely solved. 100%.
That is not how most nonprofit work feels to me.
You are only ever making small dents in important problems
I want to work on big problems. Global poverty. Malaria. Everyone not suddenly dying. But if I’m honest, what I really want is to solve those problems. Me, personally, solve them. This is a continued source of frustration and sadness because I absolutely cannot solve those problems.
Consider what else my $500 CrossFit scholarship might do:
* I want to save lives, and USAID suddenly stops giving $7 billion a year to PEPFAR. So I give $500 to the Rapid Response Fund. My donation solves about 0.000007% of the problem and I feel like I have failed.
* I want to solve climate change, and getting to net zero will require stopping or removing emissions of 1,500 billion tons of carbon dioxide. I give $500 to a policy nonprofit that reduces emissions, in expectation, by 50 tons. My donation solves 0.000000003% of the problem and I feel like I have failed. (A quick check of the arithmetic is sketched below.)
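For the curious, here is a rough back-of-the-envelope check of those two percentages, using only the figures quoted above (the $500 donation, the $7 billion annual PEPFAR gap, and 50 tons averted out of roughly 1,500 billion tons needed for net zero). These are illustrative numbers, not real cost-effectiveness estimates.

```python
# Back-of-the-envelope check of the "share of the problem solved" percentages
# in the bullets above. All figures are the illustrative ones quoted there.

donation = 500                       # dollars given
pepfar_gap = 7_000_000_000           # dollars per year no longer flowing to PEPFAR
co2_averted = 50                     # tons of CO2 averted in expectation
co2_to_net_zero = 1_500_000_000_000  # tons of CO2 to stop or remove for net zero

print(f"Share of PEPFAR gap covered:   {donation / pepfar_gap:.7%}")           # ~0.0000071%
print(f"Share of net-zero gap covered: {co2_averted / co2_to_net_zero:.10%}")  # ~0.0000000033%
```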
Is the future good in expectation? Thoughts on Will MacAskill's most recent 80k Hours podcast
My full summary of this podcast is on my website here. Below are my thoughts on the question posed above: whether the future is good in expectation.
Why Will thinks the future looks good
Will thinks the future trajectory looks good. He mainly relies on an asymmetry between altruism and sadism in reaching this conclusion: some altruistic agents will systematically pursue things that are good, but very, very few sadistic agents will systematically pursue things that are bad.
Will therefore believes there’s a strong asymmetry where the very best possible futures are somewhat plausible, but the very worst possible futures are not. He accepts that it’s entirely plausible that we squander our potential and bring about a society that’s not the very best, but he finds it much, much less plausible that we bring about the truly worst society.
My thoughts
I am highly uncertain about this point and, while I have not thought about it as much as Will seems to have, I found his reasoning unpersuasive. In particular:
To the people who have disagreed with this comment - I would be interested to learn why you disagree, if you care to share. What am I missing or getting wrong?
I upvoted but disagreed. I have a rosier view of plausible future worlds where people are as selfish as they are now, just smarter. They'd coordinate better and be more wisely selfish, which means they'd benefit the world more in order to benefit from trade. I admit it could go either way, however: if they just selfishly want factory-farmed meat, the torture is just a byproduct.
I realise that this view doesn't go against what you say at all, so I retract my disagreement.
(I should mention that the best comments are always the ones that are upvoted but disagreed with, since those tend to be the most informative or most needed. ^^)
Thanks for the explanation. I agree it's possible that smarter people could coordinate better and produce better outcomes for the world. I did recognise in my original post that one factor suggesting the future could be better is that, as people get richer and have their basic needs met, it becomes easier to be altruistic. I find that argument very plausible; it was the asymmetry argument I found unconvincing.
FWIW, I'm fine with others disagreeing with my view. It would be great to find out I'm wrong and that there is more evidence to suggest the future is rosier in expectation than I had originally thought. I just wanted people to let me know if there was a logical error or something in my original post, so thank you for taking the time to explain your thinking (and for retracting your disagreement on further consideration).
I think it's healthy to be happy about being in disagreement with other EAs about something. Either that means you can outperform them, or it means you're misunderstanding something. But if you believe the same thing, then you for sure aren't outperforming them. : )
I think the future depends to a large extent on what the people in control of extremely powerful AI end up doing with it, conditional on humanity surviving the transition to that era. We should probably speculate on what we would want those people to do, and try to prepare authoritative and legible documents that such people will be motivated to read.