I’m not necessarily disputing the idea that donating to these sorts of fundraising organizations is a good use of money, but we do need to be careful about double-counting. It’s tempting to take credit for the downstream impact of one’s meta donations while the object-level donors are also taking full credit for the programs they fund, which counts the same impact twice.
My practice, perhaps adjacent to but not identical with the one proposed here, is to give 15% of each donation to the charity evaluator or facilitator that introduced me to the main charity or program. In recent years that has been GiveWell, and the fact that they have an excess-funds regranting policy makes this an even easier decision.
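For concreteness, a minimal sketch of how that split works out, assuming the 15% comes out of the total gift rather than being added on top (the function name and amounts are just illustrative):

```python
def split_donation(total, evaluator_share=0.15):
    """Split a gift between the object-level charity and the
    evaluator/facilitator that surfaced it.

    Assumes the evaluator's cut comes out of the total rather than
    being added on top; adjust evaluator_share to taste.
    """
    to_evaluator = total * evaluator_share
    to_charity = total - to_evaluator
    return to_charity, to_evaluator

# e.g. a $1,000 gift -> $850 to the recommended program, $150 to the evaluator
to_charity, to_evaluator = split_donation(1000)
```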
Was asking for randomized controlled trials (or other methods) to demonstrate effectiveness really shockingly revolutionary?
EA didn’t invent RCTs, or even popularize them within the social sciences, but bringing them into the evaluation of aid and development programs was indeed a major change in thinking. Abhijit Banerjee, Esther Duflo, and Michael Kremer won the 2019 Nobel prize in economics largely for demonstrating the experimental approach to the study of development.
Speaking for myself, the main reason I don't get involved in AI stuff is that I feel clueless about what the correct action might be (and how valuable it might be, in expectation). I think there is a pretty strong argument that EA involvement in AI risk has made things worse, not better, and I wouldn't want to make things worse still.
Maybe just bet on V-Dem, or Regimes of the World? There is already one market for that: https://manifold.markets/Siebe/if-trump-is-elected-will-the-us-sti?play=true