I've written a blog post for a lay audience, explaining some of the reasons that AI researchers who are concerned about extinction risk have for continuing to work on AI research, despite their worries.
The apparent contradiction is causing a lot of confusion among people who haven't followed the relevant discourse closely. In many instances, this lack of clarity seems to be leading people to resort to borderline conspiratorial thinking (e.g., about the motives of signatories of the recent statement), or to otherwise dismiss the worries as not totally serious.
I hope that this piece can help make common knowledge some things that aren’t widely known outside of tech and science circles.
As an overview, the reasons I focus on are:
- Belief that their specific research isn't actually risky
- Belief that AGI is inevitable and more likely to go better if you personally are involved
- Thinking AGI is far enough away that it makes sense to keep working on AI for now
- Commitment to science for science's sake
- Belief that the benefits of AGI would outweigh even the risk of extinction
- Belief that advancing AI on net reduces global catastrophic risks, via reducing other risks
- Belief that AGI is worth it, even if it causes human extinction
I'll also note that the piece isn't meant to defend the decision of researchers who continue to work on AI despite thinking it presents extinction risks, nor to criticize them for their decision, but instead to add clarity.
If you're interested in reading more, you can follow the link here. And of course feel free to send the link to anyone who's confused by the current situation.
Adding support to Geoffrey's perspective here. Originally I thought it was just twitter shitposting, but some people in the 'e/acc' sphere seem to honestly be pro-extinction. I still hope it's just satirical roleplay mocking AI doom, but I've found it quite unnerving.
I think it's interesting that in a Senate hearing in May, Senator Kennedy (R-LA) said the following: "I would like you to assume there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying." It might be a coincidence, or he might have been talking about terrorist threats, but it still couldn't help but ring a bell for me.