Yudkowsky claims that AI developers are plunging headlong into our research in spite of believing we are about to kill all of humanity. He says each of us continues this work because we believe the herd will just outrun us if any one of us were to stop.
The truth is nothing like this. The truth is that we do not subscribe to Yudkowsky’s doomsday predictions. We work on artificial intelligence because we believe it will have great benefits for humanity and we want to do good for humankind.
We are not the monsters that Yudkowsky makes us out to be.
Daylight Saving Time Fix: The real problem is losing the hour of sleep in the spring. The solution is to set clocks back an hour in the fall, and move them forward by 40 seconds every day for 90 days in the spring. No one is going to miss 40 seconds of sleep. Most clocks are digital and set themselves, so you don't need to adjust them and you won't notice anything in spring.
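For reference, the arithmetic behind this scheme, using the 40-second and 90-day figures given above: 40 seconds/day × 90 days = 3,600 seconds = 1 hour, which exactly recovers the hour set back in the fall.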
You could substitute "work" where you write "fight". The latter evokes violence.
Among the analogies to other kinds of washing, we should include Altruism-Washing.
The difficulty is in the name "longtermist". It asserts ownership over concern for the future.
People who disagree with the ideas that carry this banner are also concerned for the future.
The founding premise of EA was that you need to weigh evidence. This distinction amounts to saying that the longtermists have abandoned the founding premise of the movement.
As it currently stands, the AI is dependent on us for everything. We supply the power and we maintain the hardware and software. I think you're asking the wrong question.
How do you propose we as humans make ourselves beneficial to an "overlord" AI so that we become indispensable?
We are already indispensable to the AI. The question should be how is it possible that we lose that?
Is this kind of replaceability compatible with current practices in Longtermism?
What is the consequence of the claim that if I fail to take an action to preserve Future People, Other Future People will likely replace them?
Let's say I give money to MIRI instead of saving current people, based on some calculation of the Future People I might save. Are we discounting those Future People by the Other Future People who would exist in their place? Why don't we value Other Future People just as much as Future People? Of course we do.
Perhaps that is the point of this thought experiment. Perhaps "of course you don't pull the switch" is the only right answer precisely because of replaceability.
What experiences tell you that there are also lots of people in ML who do think AGI is likely to kill us all, and who choose to work on advancing capabilities anyway?