Cross-posted from my blog.
Contrary to my carefully crafted brand as a weak nerd, I go to a local CrossFit gym a few times a week. Every year, the gym raises funds for a scholarship for teens from lower-income families to attend their summer camp program. I don’t know how many CrossFit-interested low-income teens there are in my small town, but I’ll guess there are perhaps 2 of them who would benefit from the scholarship. After all, CrossFit is pretty niche, and the town is small.
Helping youngsters get swole in the Pacific Northwest is not exactly as cost-effective as preventing malaria in Malawi. But I notice I feel drawn to supporting the scholarship anyway. Every time it pops into my head I think, “My money could fully solve this problem”. The camp only costs a few hundred dollars per kid, and if there are just 2 kids who need support, I could give $500 and there would no longer be teenagers in my town who want to go to a CrossFit summer camp but can’t. Thanks to me, the hero, this problem would be entirely solved. 100%.
That is not how most nonprofit work feels to me.
You are only ever making small dents in important problems
I want to work on big problems. Global poverty. Malaria. Everyone not suddenly dying. But if I’m honest, what I really want is to solve those problems. Me, personally, solve them. This is a continued source of frustration and sadness because I absolutely cannot solve those problems.
Consider what else my $500 CrossFit scholarship might do:
* I want to save lives, and USAID suddenly stops giving $7 billion a year to PEPFAR. So I give $500 to the Rapid Response Fund. My donation solves 0.000007% of the problem and I feel like I have failed.
* I want to solve climate change, and getting to net zero will require stopping or removing emissions of 1,500 billion tons of carbon dioxide. I give $500 to a policy nonprofit that reduces emissions, in expectation, by 50 tons. My donation solves 0.000000003% of the problem and I feel like I have failed.
Why would Knightian uncertainty be an argument against AI as an existential risk? If anything, our deep uncertainty about the possible outcomes of AI should lead us to be even more careful.
The section "International Game Theory" does not seem to me like an argument against AI as an existential risk.
If the USA and China decide to have a non-cooperative AI race, my sense is that this would increase existential risk rather than reduce it.
Yep, I think this is true. The point is that, given that AI stays aligned (as stated there), the best thing for a country to do would be to accelerate capabilities. You’re right, however, that it’s not an argument against AI being an existential threat (I’ll make a note to make this clearer); it’s more a point in favor of acceleration.