100% agreed with everything you said here. We thought through some of the scenarios you brought up but didn't want to get too bogged down in more complicated estimates in the main post. Our more in-depth estimates might place the break-even point closer to 1,000, perhaps a few thousand to be very safe. Happy to discuss further, but as you say, it becomes less relevant once the numbers get larger.
Interesting idea! We haven't built that in yet, but I think we could add a feature that adds up your donations throughout the year and tracks your projected impact, but waits until Giving Tuesday to actually disburse the funds (in a way that would enable the match).
As Nick said, it would be wonderful to see follow-up studies here that try to flesh out these different aspects. We don't think we're covering everything in EA (although the description Nick posted below is from effectivealtruism.org, so it seemed like a decent first attempt). But your point certainly seems correct: you could have very different answers to "who likes extreme altruism", "who likes AI safety", etc.
The community question is a particularly interesting one because it might be more of a historical artifact than a necessary trait of ...
Thanks Siebe - while I certainly agree that we don't take the most extreme form of effective altruism, I don't think it's actually as focused on narrow Effective Giving as you suggest. We used that language in the original write-up because we wanted it to be accessible to a non-EA audience. But if you look at the language of the actual description (Nick posted it above), we took it from effectivealtruism.org, and it actually focuses pretty broadly on trying to do good, not just on donating.
But as we mention, I think this is just the tip of the iceberg, I...
I really like the approach behind this post - too often EAs are hesitant to think about ways we can make use of our own psychology in pursuing altruism. To some EAs, it seems that tricks like donating to a cause area (to avoid identifying too strongly in opposition to it) should not be part of a rationalist's toolkit. But accepting that we are all biased, and doing what we can to overcome those biases in favor of what we would rationally and reflectively endorse as the unbiased viewpoint, can only increase our effectiveness in pursuing our altruistic goals.
Very nice - people have asked me before how to make a charity more effective, and it's always been somewhat uncomfortable to have to say that EA focuses more on evaluating the existing effectiveness of charities than on helping charities become more effective. But this goes one step further than helping existing charities become more effective: this is creating effective charities from the ground up. Bravo.
This is terrific - thanks for taking steps to make this a reality! Excited to see what wonderful things come out of the people who are staying there.
I'd agree with being hesitant to distinguish definitions of EA for "academic" and "outreach" purposes. It seems like that's asking for someone to use the wrong definition in the wrong context.
It could also be useful to specify a few other things about the question, such as whether charities saving future lives are legitimate to include in the calculation, and whether the language about helping the world's poorest people was specifically intended to restrict the set to global poverty charities.