There's a background belief that informs a lot of my Effective Altruism thinking, and it might be a good time to challenge it:

I think most of the value of most earning-to-give lies either in a sort of costly signaling to attract the attention of the extremely rich (who completely dwarf the funding capabilities of the bulk of EA donors), *or* in donating to places that, for various reasons, can make use of smaller amounts of startup money. (Either you have good reason to think they're useful that the current super-rich don't, you're funding smaller-scale experiments, etc.)

(This comes with the caveat that, say, getting Elon Musk's attention isn't obviously net positive, because he may or may not have actually understood what Superintelligence was warning about.)

This doesn't mean that earning to give isn't important, but it changes a bit about what sorts of earning to give are most important and why.

The main argument I've seen that points in a different direction is the notion that having all of your funding come from a few super-rich people makes you much more beholden to them, which can warp your choices. I think even in light of this I still believe the above, but maybe I should weight it differently.

This has informed how I participated in a few different discussions, but I haven't had a discussion directly examining this belief. I'm curious about people's thoughts.

The funding for meta-EA still seems to potentially be a bottleneck in the short term. This is because there are many people who already care about concrete issues like poverty and animal rights and want to give their money away to something that will have an impact. Even existential risk has people like Musk and Tallinn funding it. On the other hand, meta work seems to be funded only by Good Ventures, and only to a limited extent. If you believe this is important, then earning-to-give may be an effective strategy.

I think costly signaling is the wrong phrase here. Costly signaling is about gain for the signaler. This seems better modeled as people trying to indirectly purchase the good "rich people donate lots to charity." It's similar to people who are unwilling to donate to the government (meaning they don't think the government is better at spending money than they are) but do advocate for higher taxes (meaning they think the government is better at spending money than other people are). They're trying to purchase the good "higher taxes for everyone."

Maybe, but the thing I'm trying to get at here is that "a bunch of people saying that rich people should donate to X" is a less credible signal than "a bunch of people saying X is important enough that they are willing to donate to it themselves."

Seems like it's suggesting costly signaling at the level of the movement rather than the individual. It's a stretch from normal usage, but that's kind of the strength of analogies?