warning the Tesla founder his wealth would go to "left-wing nonprofits that will be chosen by Bill Gates."
Am I missing something, or does this argument make no sense? As far as I can tell, Musk can easily fulfill his giving pledge by donating to his preferred non-left-wing nonprofits, without deferring to Bill Gates.
I don't think so.
Some less tribalistic hypotheses I can think of:
But tribalistic explanations could be a factor too (e.g. MAHA has anti-science vibes, and EAs like to stay on the pro-science side).
(This is probably not the most constructive feedback, but my initial reaction to this shortform was that it felt like a right-wing analog of left-wing "Why don't the EAs tweet about Gaza?"-style criticisms.)
I think halting undecidability and Rice's theorem are being misapplied here. It is true that no single algorithm can decide, for every possible program and input, whether that program will halt. But for many specific programs and inputs, it is entirely possible to determine whether they halt.
I agree that there is no method that can check every possible AGI design for a given nontrivial behavioral property. But this does not prevent us from selecting an AGI design for which we can prove that specific property!
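To make the distinction concrete, here is a minimal sketch in Lean (my own illustrative toy example; `countdown` is not anything from the original argument). Rice's theorem rules out one algorithm that decides halting, or any other nontrivial behavioral property, for all programs. It does not stop us from picking a specific program and proving the property for that program; Lean's termination checker does exactly this every time it accepts a definition:

```lean
-- Lean accepts this definition only because its termination checker
-- verifies that the argument strictly decreases on each recursive
-- call, i.e. that this particular program halts on every input.
def countdown : Nat → Nat
  | 0     => 0
  | n + 1 => countdown n

-- We can also prove a specific behavioral property of this specific
-- program, even though no algorithm decides that property for
-- programs in general.
theorem countdown_eq_zero : ∀ n, countdown n = 0
  | 0     => rfl
  | n + 1 => countdown_eq_zero n
```

Undecidability only kills the universal checker; verifying a design we chose because it is verifiable is business as usual in formal methods.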
Once upon a time, some people argued that AI might kill everyone, and that EA resources should address that problem instead of fighting malaria. So OpenPhil poured millions of dollars into orgs such as EpochAI (which received $9 million). Now three people from EpochAI have created a startup to provide training data that helps AI replace human workers. Some people worry that this startup increases AI capabilities, and therefore increases the chance that AI will kill everyone.
If you and I and all of humanity get killed by AI and turned into paperclips, that would be an unprecedented moral catastrophe. If the AIs that killed us all stay around and enjoy having more paperclips, that is still extremely bad. The very act of killing us makes these AIs an unworthy successor to the human species.
The prospect of AI killing all of us makes these cases very different. Yes, in both cases a pause will probably slow GDP growth. But humans should be willing to accept lower GDP if doing so notably reduces the chance of all humans being killed.