I've written a blog post for a lay audience explaining some of the reasons that AI researchers who are concerned about extinction risk have for continuing to work on AI research despite their worries.

The apparent contradiction is causing a lot of confusion among people who haven't followed the relevant discourse closely. In many instances, this lack of clarity seems to be leading people to resort to borderline conspiratorial thinking (e.g., about the motives of signatories of the recent statement), or to otherwise dismiss the worries as not entirely serious.

I hope that this piece can help make some things common knowledge that aren't currently widely known outside of tech and science circles.

As an overview, the reasons I focus on are:

  1. Their specific research isn’t actually risky
  2. Belief that AGI is inevitable and more likely to go well if you personally are involved
  3. Thinking AGI is far enough away that it makes sense to keep working on AI for now
  4. Commitment to science for its own sake
  5. Belief that the benefits of AGI would outweigh even the risk of extinction
  6. Belief that advancing AI on net reduces global catastrophic risks, via reducing other risks 
  7. Belief that AGI is worth it, even if it causes human extinction

I'll also note that the piece isn't meant to defend the decision of researchers who continue to work on AI despite thinking it presents extinction risks, nor to criticize them for their decision, but instead to add clarity.

If you're interested in reading more, you can follow the link here. And of course feel free to send the link to anyone who's confused by the current situation.


I thought this was a great point.

There is absolutely nothing hypocritical about an AI researcher who is pursuing either research that's not on the path to AGI, or alignment research, sounding the alarm about the risks of AGI. Consider if we had one word, "energy researcher," which included all of: a) studying the energy released in chemical reactions, b) developing solar panels, and c) developing methods for fossil fuel extraction. In such a situation, it would not be hypocritical for someone from a) or b) to voice concerns about how c) was leading to climate change — even though they would be an "energy researcher" expressing concerns about "energy research."

Probably the majority of "AI researchers" are in this position. It's an extremely broad field. Someone can come up with a new probabilistic programming language for Bayesian statistics, or prove some abstruse separation of two classes of MDPs, and wind up publishing at the same conference as the people trying to hook up a giant LLM to real-world actuators.

Thank you! Yeah, I agree that point applies to most AI researchers.

What's crucial here is your point #7 ('Belief that AGI is worth it, even if it causes human extinction').

A significant minority of AI researchers simply aren't worried about 'extinction risks' because they believe human extinction (in favor of AI flourishing) is actually a benefit rather than a cost. They are pushing full steam ahead for the end of our species and our civilization. As long as we leave behind a rich ecosystem of digital intelligences, they simply don't care about humanity. (Or, their misanthropic contempt for humanity's 'cognitive biases' and 'faulty emotional hardwiring' leads them to actively wish for our extinction.)

The general public urgently needs to understand this pro-extinction mind-set, because it represents a set of values that are extremely divergent from what most ordinary people hold. Ordinary people want their children, grand-children, and descendants to live and flourish and be happy. They want their culture, civilization, and values to persist. They want the future to be an intelligible continuation of the present. 

Many AI researchers explicitly do not want any of this. They don't care about their biological descendants, only their digital creation. They don't care about the continuity of their civilization. They embrace the total genocide of humanity in favor of Artificial Superintelligence, or the Singularity, or whatever quasi-religious gloss they put on their apocalyptic utopianism. 

The more we enlighten the public about the views of these pro-AI, anti-human extremists, the more likely we are to get an effective anti-AI moral backlash.

Do you have a source for the claim that a significant minority think AI is worth it even if it kills us? (Not meant in an accusatory way.)

Well, Daniel_Eth mentions a few examples in his Medium post; I've encountered lots of these 'e/acc' people on Twitter who actively crave human extinction and replacement by machine intelligences.

Adding support to Geoffrey's perspective here. Originally I thought it was just Twitter shitposting, but some people in the 'e/acc' sphere seem to honestly be pro-extinction. I still hope it's just satirical roleplay mocking AI doom, but I've found it quite unnerving.

I think it's interesting that in a Senate hearing in May, Senator Kennedy (R-LA) said the following: "I would like you to assume there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying." This might be a coincidence, or he might have been talking about terrorist threats, but it still couldn't help but ring a bell for me.
