I think there is a huge difference between:
If you are hired by an EA org as paid staff, you only get counterfactual credit for the margin by which you are better than the next-best hire.
On the other hand, if you have a normal job and donate, all of your donations are counterfactual.
Similarly, if you do unpaid work the bar is much lower: something akin to "are the coordination costs worth it?".
BTW, earning to give is still valuable (if you can donate over $50,000 a year).
This is still an extremely high bar.
Hi,
I care instrumentally, because it may negatively impact the welfare of other people.
And yes, I agree that this is totally compatible with rejecting egalitarianism and prioritarianism, though that compatibility is not obvious.
I was trying to illustrate why I think many people endorse some sort of egalitarianism and have thoughts like "inequality bad", which are easy to confuse with "inequality intrinsically bad".
I have another intuition for egalitarianism: the distribution of power.
Most resources in our world can be traded for influence/power, such as money, time and materials.
Therefore, in the real scenarios that guide our intuitions, inequality is associated with concentration of power.
To put it in a caricatured example: I don't care if TechnoBro 3000 celebrates his birthday in the asteroid belt with his 10^30 gold-plated robot friends, but I do care if he can buy the elections of Democratistan.
This is not a rebuttal of the narrow definition of egalitarianism, but is close enough to work as an intuition pump if we are not being very theoretical.
Maybe because "P(doom) ranges from 10% to 99%" excludes many people who state a lower P(doom) or refuse to state a number.
Maybe because "Will MacAskill sits at 10–20%, calling himself 'optimistic today' — but notes this is among the lowest estimates in serious circles" implies that people who state a lower number are not serious.
Those were my reasons for considering a downvote. In the end I didn't, because by then the post was already in the negatives.
There is also a huge selection bias at play here: people who appear on AI safety podcasts or use the expression P(doom) have self-selected for higher numbers than people who don't, and the post does not address this.
I think the devil is in the details. The principles are fine, but what really matters is the operationalization, and that is hard to judge without more information about the program.