skluug

Comments

AI Risk is like Terminator; Stop Saying it's Not

I don't think this is a good characterization of e.g. Kelsey's preference for her Philip Morris analogy over the Terminator analogy. Does rogue Philip Morris sound like a far harder problem to solve than rogue Skynet? Not to me, which is why the preference seems motivated much more by not wanting to sound science-fiction-y. The same goes for Dylan's piece: it doesn't seem to be saying "AI risk is a much harder problem than implied by the Terminator films", except insofar as it misrepresents those films as involving evil humans intentionally making evil AI.

It seems to me like the proper explanatory path is "Like Terminator?" -> "Basically" -> "So why not just not give AI nuclear launch codes?" -> "There are a lot of other ways AI could take over". 

"Like Terminator?" -> "No, like Philip Morris" seems liable to confuse the audience about the very basic details of the issue, because Philip Morris didn't take over the world. 

AI Risk is like Terminator; Stop Saying it's Not

I feel like this is a pretty insignificant objection, because it implies someone might go around thinking, "don't worry, AI risk is just like Terminator! all we'll have to do is bring humanity back from the brink of extinction, fighting amongst the rubble of civilization after a nuclear holocaust". Surely if people think the threat is only as bad as Terminator, that's plenty to get them to care.

[$20K In Prizes] AI Safety Arguments Competition

“Perhaps the best window into what those working on AI really believe [about existential risks from AI] comes from the 2016 survey of leading AI researchers. As well as asking if and when AGI might be developed, it asked about the risks: 70 percent of the researchers agreed with Stuart Russell’s broad argument about why advanced AI might pose a risk; 48 percent thought society should prioritize AI safety research more (only 12 percent thought less). And half the respondents estimated that the probability of the longterm impact of AGI being “extremely bad (e.g., human extinction)” was at least 5 percent. I find this last point particularly remarkable—in how many other fields would the typical leading researcher think there is a one in twenty chance the field’s ultimate goal would be extremely bad for humanity?”

  • Toby Ord, The Precipice

AI Risk is like Terminator; Stop Saying it's Not

Thanks for reading—you’re definitely right, my claim about the representativeness of Yudkowsky & Christiano’s views was wrong. I had only a narrow segment of the field in mind when I wrote this post. Thank you for conducting this very informative survey.

AI Risk is like Terminator; Stop Saying it's Not

Thanks for reading! I admire that you take the time to respond to critiques even by random internet strangers. Thank you for all your hard work in promoting effective altruist ideas.

AI Risk is like Terminator; Stop Saying it's Not

Yeah, you're right actually, that paragraph is a little too idealistic.

As a practical measure, I think it cuts both ways. Some people will hear "yes, like Terminator" and roll their eyes. Some people will hear "no, not like Terminator", get bored, and tune out. Embracing the comparison is helpful, in part, because it lets you quickly establish the stakes. The best path is probably somewhere in the middle, depending on the audience and context.

Overall I think it's just about finding that balance.

On presenting the case for AI risk

fwiw my friend said he recently explained AI risk to his mom, and her response was "yeah, that makes sense."

AI Risk is like Terminator; Stop Saying it's Not

Wow, this is a really interesting point that I was not aware of.

AI Risk is like Terminator; Stop Saying it's Not

I think these have more to do with how some people remember Terminator than with Terminator itself:

  • As I stated in this post, the AI in Terminator is not malevolent; it attacks humanity out of self-preservation.
  • Whether the AIs are conscious is not explored in the movies, although we do get shots from the Terminator's perspective, and Skynet is described as "self-aware". Most people have a pretty loose understanding of what "consciousness" means anyway, one not far off from "general intelligence".
  • Cyberdyne Systems is not portrayed as greedy, at least in the first two films. As soon as the head of research is told about the future consequences of his actions in Terminator 2, he teams up with the heroes to destroy the whole project. No one else at the company tries to stop them or is even a character, apart from some unlucky security guards.
  • The android objection has the most legs. But the film does state that most humans were not killed by robots, but by the nuclear war initiated by Skynet. If Terminator comparisons are embraced, it should be emphasized that an AI could find many different routes to world domination.

I would also contend that 2 & 3 don't count as thought-terminating. AGI very well could be conscious, and in real life, corporations are greedy.

AI Risk is like Terminator; Stop Saying it's Not

I would like to thank N.N., Voxette, tyuuyookoobung & TCP for reviewing drafts of this post. 

I rewatched Terminator 1 & 2 to write this post. One thing I liked but couldn't fit in: Terminator 2 contains an example of the value specification problem! Young John Connor makes the good Terminator swear not to kill people; the Terminator immediately switches to merely maiming people severely instead.
